Test Report: KVM_Linux_crio 18333

35bb0a6fdb2e8bad0653ad48b3d817d653ac2a3a:2024-03-08:33467

Tests failed (31/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 150.84
53 TestAddons/StoppedEnableDisable 154.53
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.76
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
172 TestMutliControlPlane/serial/StopSecondaryNode 142.08
174 TestMutliControlPlane/serial/RestartSecondaryNode 56.37
176 TestMutliControlPlane/serial/RestartClusterKeepsNodes 375.58
179 TestMutliControlPlane/serial/StopCluster 141.98
239 TestMultiNode/serial/RestartKeepsNodes 313.62
241 TestMultiNode/serial/StopMultiNode 141.43
248 TestPreload 181.07
256 TestKubernetesUpgrade 401.81
273 TestPause/serial/SecondStartNoReconfiguration 66.78
293 TestStartStop/group/old-k8s-version/serial/FirstStart 281.97
302 TestStartStop/group/no-preload/serial/Stop 139.02
305 TestStartStop/group/embed-certs/serial/Stop 139.1
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
309 TestStartStop/group/old-k8s-version/serial/DeployApp 0.55
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 89.66
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/old-k8s-version/serial/SecondStart 768.42
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.21
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.25
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.19
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.38
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 501.17
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 359.15
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 249.54
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 115.91
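
A sketch of re-running one failure from this table locally, assuming a minikube checkout at the commit listed in the header and the standard Go subtest selector; flag names that are not visible in these logs are assumptions, and the repository's own integration harness (test/integration/main_test.go) remains the authoritative entry point:

    # From the minikube repo root, at commit 35bb0a6fdb2e8bad0653ad48b3d817d653ac2a3a.
    make                                  # builds the binary the tests invoke (out/minikube-linux-amd64 in these logs)
    go test ./test/integration -v -timeout 60m \
        -run 'TestAddons/parallel/Ingress'
    # This run used kvm2 + cri-o; those are passed via the harness's own
    # start-args flag (check test/integration/main_test.go for the exact name,
    # e.g. -minikube-start-args='--driver=kvm2 --container-runtime=crio').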
TestAddons/parallel/Ingress (150.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-963897 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-963897 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-963897 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bf76ad91-e4c0-4d06-b04c-597192b9dea0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bf76ad91-e4c0-4d06-b04c-597192b9dea0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.006181108s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-963897 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.103952036s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-963897 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.212
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-963897 addons disable ingress-dns --alsologtostderr -v=1: (1.733388368s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-963897 addons disable ingress --alsologtostderr -v=1: (7.771797123s)
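
The curl step above never got a response: ssh exit status 28 is consistent with curl's timeout exit code (28, CURLE_OPERATION_TIMEDOUT). A minimal sketch for retracing that step by hand against the same profile; the profile name and binary path come from the log above, while the ingress-nginx namespace check and the extra curl flags are debugging additions, not part of the test:

    # Is the ingress-nginx controller actually Ready inside the cluster?
    kubectl --context addons-963897 -n ingress-nginx get pods -o wide
    # Repeat the in-VM request verbosely with an explicit time limit;
    # exit code 28 from curl indicates the request timed out.
    out/minikube-linux-amd64 -p addons-963897 ssh \
        "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"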
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-963897 -n addons-963897
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-963897 logs -n 25: (1.372850104s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-219734                                                                     | download-only-219734 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| delete  | -p download-only-029776                                                                     | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| delete  | -p download-only-925127                                                                     | download-only-925127 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| delete  | -p download-only-219734                                                                     | download-only-219734 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-920537 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC |                     |
	|         | binary-mirror-920537                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33887                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-920537                                                                     | binary-mirror-920537 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC |                     |
	|         | addons-963897                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC |                     |
	|         | addons-963897                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-963897 --wait=true                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-963897 ssh cat                                                                       | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | /opt/local-path-provisioner/pvc-23f464d9-185e-46fe-9762-6116259b684b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-963897 addons disable                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | -p addons-963897                                                                            |                      |         |         |                     |                     |
	| ip      | addons-963897 ip                                                                            | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	| addons  | addons-963897 addons disable                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | -p addons-963897                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | addons-963897                                                                               |                      |         |         |                     |                     |
	| addons  | addons-963897 addons                                                                        | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:58 UTC | 08 Mar 24 02:58 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:59 UTC | 08 Mar 24 02:59 UTC |
	|         | addons-963897                                                                               |                      |         |         |                     |                     |
	| addons  | addons-963897 addons disable                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:59 UTC | 08 Mar 24 02:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-963897 ssh curl -s                                                                   | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-963897 addons                                                                        | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:59 UTC | 08 Mar 24 02:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-963897 addons                                                                        | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 02:59 UTC | 08 Mar 24 02:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-963897 ip                                                                            | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 03:01 UTC | 08 Mar 24 03:01 UTC |
	| addons  | addons-963897 addons disable                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 03:01 UTC | 08 Mar 24 03:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-963897 addons disable                                                                | addons-963897        | jenkins | v1.32.0 | 08 Mar 24 03:01 UTC | 08 Mar 24 03:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:56:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:56:06.724690  919714 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:56:06.724948  919714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:56:06.724960  919714 out.go:304] Setting ErrFile to fd 2...
	I0308 02:56:06.724964  919714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:56:06.725164  919714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 02:56:06.725829  919714 out.go:298] Setting JSON to false
	I0308 02:56:06.726764  919714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":23893,"bootTime":1709842674,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:56:06.726836  919714 start.go:139] virtualization: kvm guest
	I0308 02:56:06.729025  919714 out.go:177] * [addons-963897] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:56:06.730579  919714 notify.go:220] Checking for updates...
	I0308 02:56:06.730608  919714 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 02:56:06.732154  919714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:56:06.733839  919714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 02:56:06.735393  919714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:56:06.736726  919714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 02:56:06.738110  919714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 02:56:06.739515  919714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:56:06.769516  919714 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 02:56:06.770805  919714 start.go:297] selected driver: kvm2
	I0308 02:56:06.770820  919714 start.go:901] validating driver "kvm2" against <nil>
	I0308 02:56:06.770835  919714 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 02:56:06.771484  919714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:56:06.771582  919714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 02:56:06.785890  919714 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 02:56:06.785935  919714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:56:06.786150  919714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:56:06.786232  919714 cni.go:84] Creating CNI manager for ""
	I0308 02:56:06.786259  919714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 02:56:06.786268  919714 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 02:56:06.786337  919714 start.go:340] cluster config:
	{Name:addons-963897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:56:06.786481  919714 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:56:06.788211  919714 out.go:177] * Starting "addons-963897" primary control-plane node in "addons-963897" cluster
	I0308 02:56:06.789478  919714 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:56:06.789514  919714 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 02:56:06.789528  919714 cache.go:56] Caching tarball of preloaded images
	I0308 02:56:06.789612  919714 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 02:56:06.789627  919714 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 02:56:06.790023  919714 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/config.json ...
	I0308 02:56:06.790050  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/config.json: {Name:mkf2cf6d758ad8d1283d4c937889b21b965996bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:06.790194  919714 start.go:360] acquireMachinesLock for addons-963897: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 02:56:06.790249  919714 start.go:364] duration metric: took 38.471µs to acquireMachinesLock for "addons-963897"
	I0308 02:56:06.790267  919714 start.go:93] Provisioning new machine with config: &{Name:addons-963897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-963897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:56:06.790328  919714 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 02:56:06.791962  919714 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0308 02:56:06.792089  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:56:06.792136  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:56:06.805495  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46361
	I0308 02:56:06.806044  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:56:06.806738  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:56:06.806773  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:56:06.807192  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:56:06.807390  919714 main.go:141] libmachine: (addons-963897) Calling .GetMachineName
	I0308 02:56:06.807545  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:06.807718  919714 start.go:159] libmachine.API.Create for "addons-963897" (driver="kvm2")
	I0308 02:56:06.807753  919714 client.go:168] LocalClient.Create starting
	I0308 02:56:06.807808  919714 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 02:56:06.850065  919714 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 02:56:07.019119  919714 main.go:141] libmachine: Running pre-create checks...
	I0308 02:56:07.019153  919714 main.go:141] libmachine: (addons-963897) Calling .PreCreateCheck
	I0308 02:56:07.019684  919714 main.go:141] libmachine: (addons-963897) Calling .GetConfigRaw
	I0308 02:56:07.020166  919714 main.go:141] libmachine: Creating machine...
	I0308 02:56:07.020183  919714 main.go:141] libmachine: (addons-963897) Calling .Create
	I0308 02:56:07.020350  919714 main.go:141] libmachine: (addons-963897) Creating KVM machine...
	I0308 02:56:07.021639  919714 main.go:141] libmachine: (addons-963897) DBG | found existing default KVM network
	I0308 02:56:07.022470  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:07.022331  919736 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0308 02:56:07.022506  919714 main.go:141] libmachine: (addons-963897) DBG | created network xml: 
	I0308 02:56:07.022529  919714 main.go:141] libmachine: (addons-963897) DBG | <network>
	I0308 02:56:07.022546  919714 main.go:141] libmachine: (addons-963897) DBG |   <name>mk-addons-963897</name>
	I0308 02:56:07.022559  919714 main.go:141] libmachine: (addons-963897) DBG |   <dns enable='no'/>
	I0308 02:56:07.022572  919714 main.go:141] libmachine: (addons-963897) DBG |   
	I0308 02:56:07.022580  919714 main.go:141] libmachine: (addons-963897) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 02:56:07.022587  919714 main.go:141] libmachine: (addons-963897) DBG |     <dhcp>
	I0308 02:56:07.022599  919714 main.go:141] libmachine: (addons-963897) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 02:56:07.022609  919714 main.go:141] libmachine: (addons-963897) DBG |     </dhcp>
	I0308 02:56:07.022617  919714 main.go:141] libmachine: (addons-963897) DBG |   </ip>
	I0308 02:56:07.022631  919714 main.go:141] libmachine: (addons-963897) DBG |   
	I0308 02:56:07.022639  919714 main.go:141] libmachine: (addons-963897) DBG | </network>
	I0308 02:56:07.022650  919714 main.go:141] libmachine: (addons-963897) DBG | 
	I0308 02:56:07.028018  919714 main.go:141] libmachine: (addons-963897) DBG | trying to create private KVM network mk-addons-963897 192.168.39.0/24...
	I0308 02:56:07.094095  919714 main.go:141] libmachine: (addons-963897) DBG | private KVM network mk-addons-963897 192.168.39.0/24 created
	I0308 02:56:07.094151  919714 main.go:141] libmachine: (addons-963897) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897 ...
	I0308 02:56:07.094182  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:07.094112  919736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:56:07.094205  919714 main.go:141] libmachine: (addons-963897) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 02:56:07.094318  919714 main.go:141] libmachine: (addons-963897) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 02:56:07.337232  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:07.337087  919736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa...
	I0308 02:56:07.711403  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:07.711255  919736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/addons-963897.rawdisk...
	I0308 02:56:07.711434  919714 main.go:141] libmachine: (addons-963897) DBG | Writing magic tar header
	I0308 02:56:07.711448  919714 main.go:141] libmachine: (addons-963897) DBG | Writing SSH key tar header
	I0308 02:56:07.711458  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:07.711400  919736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897 ...
	I0308 02:56:07.711482  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897
	I0308 02:56:07.711510  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897 (perms=drwx------)
	I0308 02:56:07.711538  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 02:56:07.711549  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 02:56:07.711566  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 02:56:07.711584  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:56:07.711598  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 02:56:07.711612  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 02:56:07.711624  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 02:56:07.711642  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 02:56:07.711721  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home/jenkins
	I0308 02:56:07.711759  919714 main.go:141] libmachine: (addons-963897) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 02:56:07.711771  919714 main.go:141] libmachine: (addons-963897) DBG | Checking permissions on dir: /home
	I0308 02:56:07.711790  919714 main.go:141] libmachine: (addons-963897) DBG | Skipping /home - not owner
	I0308 02:56:07.711805  919714 main.go:141] libmachine: (addons-963897) Creating domain...
	I0308 02:56:07.713017  919714 main.go:141] libmachine: (addons-963897) define libvirt domain using xml: 
	I0308 02:56:07.713045  919714 main.go:141] libmachine: (addons-963897) <domain type='kvm'>
	I0308 02:56:07.713052  919714 main.go:141] libmachine: (addons-963897)   <name>addons-963897</name>
	I0308 02:56:07.713062  919714 main.go:141] libmachine: (addons-963897)   <memory unit='MiB'>4000</memory>
	I0308 02:56:07.713068  919714 main.go:141] libmachine: (addons-963897)   <vcpu>2</vcpu>
	I0308 02:56:07.713074  919714 main.go:141] libmachine: (addons-963897)   <features>
	I0308 02:56:07.713082  919714 main.go:141] libmachine: (addons-963897)     <acpi/>
	I0308 02:56:07.713090  919714 main.go:141] libmachine: (addons-963897)     <apic/>
	I0308 02:56:07.713098  919714 main.go:141] libmachine: (addons-963897)     <pae/>
	I0308 02:56:07.713111  919714 main.go:141] libmachine: (addons-963897)     
	I0308 02:56:07.713117  919714 main.go:141] libmachine: (addons-963897)   </features>
	I0308 02:56:07.713122  919714 main.go:141] libmachine: (addons-963897)   <cpu mode='host-passthrough'>
	I0308 02:56:07.713149  919714 main.go:141] libmachine: (addons-963897)   
	I0308 02:56:07.713185  919714 main.go:141] libmachine: (addons-963897)   </cpu>
	I0308 02:56:07.713200  919714 main.go:141] libmachine: (addons-963897)   <os>
	I0308 02:56:07.713209  919714 main.go:141] libmachine: (addons-963897)     <type>hvm</type>
	I0308 02:56:07.713219  919714 main.go:141] libmachine: (addons-963897)     <boot dev='cdrom'/>
	I0308 02:56:07.713228  919714 main.go:141] libmachine: (addons-963897)     <boot dev='hd'/>
	I0308 02:56:07.713235  919714 main.go:141] libmachine: (addons-963897)     <bootmenu enable='no'/>
	I0308 02:56:07.713242  919714 main.go:141] libmachine: (addons-963897)   </os>
	I0308 02:56:07.713304  919714 main.go:141] libmachine: (addons-963897)   <devices>
	I0308 02:56:07.713333  919714 main.go:141] libmachine: (addons-963897)     <disk type='file' device='cdrom'>
	I0308 02:56:07.713349  919714 main.go:141] libmachine: (addons-963897)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/boot2docker.iso'/>
	I0308 02:56:07.713362  919714 main.go:141] libmachine: (addons-963897)       <target dev='hdc' bus='scsi'/>
	I0308 02:56:07.713381  919714 main.go:141] libmachine: (addons-963897)       <readonly/>
	I0308 02:56:07.713399  919714 main.go:141] libmachine: (addons-963897)     </disk>
	I0308 02:56:07.713412  919714 main.go:141] libmachine: (addons-963897)     <disk type='file' device='disk'>
	I0308 02:56:07.713426  919714 main.go:141] libmachine: (addons-963897)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 02:56:07.713446  919714 main.go:141] libmachine: (addons-963897)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/addons-963897.rawdisk'/>
	I0308 02:56:07.713457  919714 main.go:141] libmachine: (addons-963897)       <target dev='hda' bus='virtio'/>
	I0308 02:56:07.713469  919714 main.go:141] libmachine: (addons-963897)     </disk>
	I0308 02:56:07.713479  919714 main.go:141] libmachine: (addons-963897)     <interface type='network'>
	I0308 02:56:07.713485  919714 main.go:141] libmachine: (addons-963897)       <source network='mk-addons-963897'/>
	I0308 02:56:07.713493  919714 main.go:141] libmachine: (addons-963897)       <model type='virtio'/>
	I0308 02:56:07.713498  919714 main.go:141] libmachine: (addons-963897)     </interface>
	I0308 02:56:07.713505  919714 main.go:141] libmachine: (addons-963897)     <interface type='network'>
	I0308 02:56:07.713511  919714 main.go:141] libmachine: (addons-963897)       <source network='default'/>
	I0308 02:56:07.713518  919714 main.go:141] libmachine: (addons-963897)       <model type='virtio'/>
	I0308 02:56:07.713524  919714 main.go:141] libmachine: (addons-963897)     </interface>
	I0308 02:56:07.713531  919714 main.go:141] libmachine: (addons-963897)     <serial type='pty'>
	I0308 02:56:07.713540  919714 main.go:141] libmachine: (addons-963897)       <target port='0'/>
	I0308 02:56:07.713548  919714 main.go:141] libmachine: (addons-963897)     </serial>
	I0308 02:56:07.713555  919714 main.go:141] libmachine: (addons-963897)     <console type='pty'>
	I0308 02:56:07.713563  919714 main.go:141] libmachine: (addons-963897)       <target type='serial' port='0'/>
	I0308 02:56:07.713568  919714 main.go:141] libmachine: (addons-963897)     </console>
	I0308 02:56:07.713575  919714 main.go:141] libmachine: (addons-963897)     <rng model='virtio'>
	I0308 02:56:07.713581  919714 main.go:141] libmachine: (addons-963897)       <backend model='random'>/dev/random</backend>
	I0308 02:56:07.713592  919714 main.go:141] libmachine: (addons-963897)     </rng>
	I0308 02:56:07.713628  919714 main.go:141] libmachine: (addons-963897)     
	I0308 02:56:07.713651  919714 main.go:141] libmachine: (addons-963897)     
	I0308 02:56:07.713662  919714 main.go:141] libmachine: (addons-963897)   </devices>
	I0308 02:56:07.713671  919714 main.go:141] libmachine: (addons-963897) </domain>
	I0308 02:56:07.713682  919714 main.go:141] libmachine: (addons-963897) 
	I0308 02:56:07.718532  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:97:94:e9 in network default
	I0308 02:56:07.719132  919714 main.go:141] libmachine: (addons-963897) Ensuring networks are active...
	I0308 02:56:07.719172  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:07.719911  919714 main.go:141] libmachine: (addons-963897) Ensuring network default is active
	I0308 02:56:07.720234  919714 main.go:141] libmachine: (addons-963897) Ensuring network mk-addons-963897 is active
	I0308 02:56:07.720786  919714 main.go:141] libmachine: (addons-963897) Getting domain xml...
	I0308 02:56:07.721716  919714 main.go:141] libmachine: (addons-963897) Creating domain...
	I0308 02:56:08.892064  919714 main.go:141] libmachine: (addons-963897) Waiting to get IP...
	I0308 02:56:08.893080  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:08.893563  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:08.893616  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:08.893542  919736 retry.go:31] will retry after 204.625488ms: waiting for machine to come up
	I0308 02:56:09.099998  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:09.100428  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:09.100461  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:09.100369  919736 retry.go:31] will retry after 298.761154ms: waiting for machine to come up
	I0308 02:56:09.400879  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:09.401293  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:09.401346  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:09.401250  919736 retry.go:31] will retry after 486.216046ms: waiting for machine to come up
	I0308 02:56:09.888919  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:09.889478  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:09.889503  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:09.889436  919736 retry.go:31] will retry after 412.246476ms: waiting for machine to come up
	I0308 02:56:10.302983  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:10.303509  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:10.303553  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:10.303464  919736 retry.go:31] will retry after 649.074607ms: waiting for machine to come up
	I0308 02:56:10.954361  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:10.954748  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:10.954782  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:10.954696  919736 retry.go:31] will retry after 844.030243ms: waiting for machine to come up
	I0308 02:56:11.800552  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:11.801022  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:11.801052  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:11.800969  919736 retry.go:31] will retry after 1.110105809s: waiting for machine to come up
	I0308 02:56:12.912540  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:12.912922  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:12.912951  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:12.912880  919736 retry.go:31] will retry after 1.079376895s: waiting for machine to come up
	I0308 02:56:13.994200  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:13.994566  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:13.994596  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:13.994516  919736 retry.go:31] will retry after 1.198489918s: waiting for machine to come up
	I0308 02:56:15.194444  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:15.194922  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:15.194960  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:15.194831  919736 retry.go:31] will retry after 2.297335391s: waiting for machine to come up
	I0308 02:56:17.493351  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:17.493772  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:17.493809  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:17.493710  919736 retry.go:31] will retry after 1.855102029s: waiting for machine to come up
	I0308 02:56:19.350008  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:19.350434  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:19.350472  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:19.350375  919736 retry.go:31] will retry after 3.173639928s: waiting for machine to come up
	I0308 02:56:22.525096  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:22.525566  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:22.525595  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:22.525515  919736 retry.go:31] will retry after 3.465244127s: waiting for machine to come up
	I0308 02:56:25.991923  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:25.992371  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find current IP address of domain addons-963897 in network mk-addons-963897
	I0308 02:56:25.992397  919714 main.go:141] libmachine: (addons-963897) DBG | I0308 02:56:25.992317  919736 retry.go:31] will retry after 4.533193383s: waiting for machine to come up
	I0308 02:56:30.528869  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:30.529306  919714 main.go:141] libmachine: (addons-963897) Found IP for machine: 192.168.39.212
	I0308 02:56:30.529334  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has current primary IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:30.529340  919714 main.go:141] libmachine: (addons-963897) Reserving static IP address...
	I0308 02:56:30.529775  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find host DHCP lease matching {name: "addons-963897", mac: "52:54:00:4c:9d:15", ip: "192.168.39.212"} in network mk-addons-963897
	I0308 02:56:30.605134  919714 main.go:141] libmachine: (addons-963897) DBG | Getting to WaitForSSH function...
	I0308 02:56:30.605194  919714 main.go:141] libmachine: (addons-963897) Reserved static IP address: 192.168.39.212
	I0308 02:56:30.605208  919714 main.go:141] libmachine: (addons-963897) Waiting for SSH to be available...
	I0308 02:56:30.607815  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:30.608207  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897
	I0308 02:56:30.608239  919714 main.go:141] libmachine: (addons-963897) DBG | unable to find defined IP address of network mk-addons-963897 interface with MAC address 52:54:00:4c:9d:15
	I0308 02:56:30.608340  919714 main.go:141] libmachine: (addons-963897) DBG | Using SSH client type: external
	I0308 02:56:30.608367  919714 main.go:141] libmachine: (addons-963897) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa (-rw-------)
	I0308 02:56:30.608400  919714 main.go:141] libmachine: (addons-963897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 02:56:30.608418  919714 main.go:141] libmachine: (addons-963897) DBG | About to run SSH command:
	I0308 02:56:30.608435  919714 main.go:141] libmachine: (addons-963897) DBG | exit 0
	I0308 02:56:30.612237  919714 main.go:141] libmachine: (addons-963897) DBG | SSH cmd err, output: exit status 255: 
	I0308 02:56:30.612258  919714 main.go:141] libmachine: (addons-963897) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0308 02:56:30.612272  919714 main.go:141] libmachine: (addons-963897) DBG | command : exit 0
	I0308 02:56:30.612285  919714 main.go:141] libmachine: (addons-963897) DBG | err     : exit status 255
	I0308 02:56:30.612326  919714 main.go:141] libmachine: (addons-963897) DBG | output  : 
	I0308 02:56:33.614070  919714 main.go:141] libmachine: (addons-963897) DBG | Getting to WaitForSSH function...
	I0308 02:56:33.616698  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.617203  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:33.617224  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.617417  919714 main.go:141] libmachine: (addons-963897) DBG | Using SSH client type: external
	I0308 02:56:33.617455  919714 main.go:141] libmachine: (addons-963897) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa (-rw-------)
	I0308 02:56:33.617488  919714 main.go:141] libmachine: (addons-963897) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 02:56:33.617507  919714 main.go:141] libmachine: (addons-963897) DBG | About to run SSH command:
	I0308 02:56:33.617521  919714 main.go:141] libmachine: (addons-963897) DBG | exit 0
	I0308 02:56:33.741209  919714 main.go:141] libmachine: (addons-963897) DBG | SSH cmd err, output: <nil>: 
	I0308 02:56:33.741535  919714 main.go:141] libmachine: (addons-963897) KVM machine creation complete!
	I0308 02:56:33.741912  919714 main.go:141] libmachine: (addons-963897) Calling .GetConfigRaw
	I0308 02:56:33.742495  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:33.742689  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:33.742901  919714 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 02:56:33.742916  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:56:33.744404  919714 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 02:56:33.744424  919714 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 02:56:33.744434  919714 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 02:56:33.744444  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:33.746885  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.747263  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:33.747288  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.747432  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:33.747649  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.747821  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.747997  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:33.748201  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:33.748415  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:33.748428  919714 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 02:56:33.856478  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 02:56:33.856508  919714 main.go:141] libmachine: Detecting the provisioner...
	I0308 02:56:33.856539  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:33.859416  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.859774  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:33.859796  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.859946  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:33.860130  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.860289  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.860429  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:33.860577  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:33.860746  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:33.860757  919714 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 02:56:33.970296  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 02:56:33.970472  919714 main.go:141] libmachine: found compatible host: buildroot
	I0308 02:56:33.970492  919714 main.go:141] libmachine: Provisioning with buildroot...
	I0308 02:56:33.970504  919714 main.go:141] libmachine: (addons-963897) Calling .GetMachineName
	I0308 02:56:33.970822  919714 buildroot.go:166] provisioning hostname "addons-963897"
	I0308 02:56:33.970858  919714 main.go:141] libmachine: (addons-963897) Calling .GetMachineName
	I0308 02:56:33.971059  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:33.973796  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.974155  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:33.974185  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:33.974300  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:33.974479  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.974627  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:33.974745  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:33.974888  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:33.975064  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:33.975078  919714 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-963897 && echo "addons-963897" | sudo tee /etc/hostname
	I0308 02:56:34.096145  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-963897
	
	I0308 02:56:34.096172  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.099259  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.099626  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.099646  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.099838  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.100037  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.100190  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.100333  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.100476  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:34.100640  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:34.100657  919714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-963897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-963897/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-963897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 02:56:34.218636  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 02:56:34.218673  919714 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 02:56:34.218696  919714 buildroot.go:174] setting up certificates
	I0308 02:56:34.218706  919714 provision.go:84] configureAuth start
	I0308 02:56:34.218715  919714 main.go:141] libmachine: (addons-963897) Calling .GetMachineName
	I0308 02:56:34.219055  919714 main.go:141] libmachine: (addons-963897) Calling .GetIP
	I0308 02:56:34.221559  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.222065  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.222096  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.222242  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.224300  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.224647  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.224679  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.224767  919714 provision.go:143] copyHostCerts
	I0308 02:56:34.224849  919714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 02:56:34.225003  919714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 02:56:34.225073  919714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 02:56:34.225165  919714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.addons-963897 san=[127.0.0.1 192.168.39.212 addons-963897 localhost minikube]
	I0308 02:56:34.308739  919714 provision.go:177] copyRemoteCerts
	I0308 02:56:34.308807  919714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 02:56:34.308831  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.311725  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.312000  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.312029  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.312274  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.312472  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.312646  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.312782  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:56:34.398552  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 02:56:34.423956  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 02:56:34.448834  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 02:56:34.473687  919714 provision.go:87] duration metric: took 254.966035ms to configureAuth
	I0308 02:56:34.473721  919714 buildroot.go:189] setting minikube options for container-runtime
	I0308 02:56:34.473899  919714 config.go:182] Loaded profile config "addons-963897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:56:34.473997  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.476509  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.476856  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.476887  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.477103  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.477303  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.477462  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.477637  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.477858  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:34.478055  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:34.478073  919714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 02:56:34.766247  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 02:56:34.766284  919714 main.go:141] libmachine: Checking connection to Docker...
	I0308 02:56:34.766293  919714 main.go:141] libmachine: (addons-963897) Calling .GetURL
	I0308 02:56:34.767691  919714 main.go:141] libmachine: (addons-963897) DBG | Using libvirt version 6000000
	I0308 02:56:34.769983  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.770383  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.770415  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.770546  919714 main.go:141] libmachine: Docker is up and running!
	I0308 02:56:34.770562  919714 main.go:141] libmachine: Reticulating splines...
	I0308 02:56:34.770572  919714 client.go:171] duration metric: took 27.96280746s to LocalClient.Create
	I0308 02:56:34.770608  919714 start.go:167] duration metric: took 27.962890254s to libmachine.API.Create "addons-963897"
	I0308 02:56:34.770624  919714 start.go:293] postStartSetup for "addons-963897" (driver="kvm2")
	I0308 02:56:34.770647  919714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 02:56:34.770683  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:34.770944  919714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 02:56:34.770972  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.773014  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.773328  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.773348  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.773504  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.773681  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.773838  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.773937  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:56:34.860527  919714 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 02:56:34.867006  919714 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 02:56:34.867039  919714 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 02:56:34.867153  919714 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 02:56:34.867200  919714 start.go:296] duration metric: took 96.561097ms for postStartSetup
	I0308 02:56:34.867281  919714 main.go:141] libmachine: (addons-963897) Calling .GetConfigRaw
	I0308 02:56:34.868410  919714 main.go:141] libmachine: (addons-963897) Calling .GetIP
	I0308 02:56:34.871163  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.871537  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.871565  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.871802  919714 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/config.json ...
	I0308 02:56:34.871997  919714 start.go:128] duration metric: took 28.081657288s to createHost
	I0308 02:56:34.872032  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.874219  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.874495  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.874522  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.874620  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.874802  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.874974  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.875149  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.875316  919714 main.go:141] libmachine: Using SSH client type: native
	I0308 02:56:34.875481  919714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I0308 02:56:34.875492  919714 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 02:56:34.986416  919714 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709866594.975038287
	
	I0308 02:56:34.986440  919714 fix.go:216] guest clock: 1709866594.975038287
	I0308 02:56:34.986449  919714 fix.go:229] Guest: 2024-03-08 02:56:34.975038287 +0000 UTC Remote: 2024-03-08 02:56:34.87201139 +0000 UTC m=+28.194153882 (delta=103.026897ms)
	I0308 02:56:34.986517  919714 fix.go:200] guest clock delta is within tolerance: 103.026897ms
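	The "date +%!s(MISSING).%!N(MISSING)" logged just below this point is really `date +%s.%N`; the %!s(MISSING)/%!N(MISSING) tokens are simply how Go's fmt package renders format verbs that have no matching argument when the remote command string is echoed through a printf-style logger, and the same artifact accounts for the other %!…(MISSING) tokens elsewhere in this log. A minimal Go sketch of that rendering and of the guest-clock delta reported above (variable names here are illustrative, not minikube's own):

		package main

		import "fmt"

		func main() {
			// A verb with no matching argument renders as %!<verb>(MISSING), so echoing the
			// command through a printf-style call logs "date +%!s(MISSING).%!N(MISSING)"
			// even though `date +%s.%N` is what actually runs on the guest.
			fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)

			// Guest vs. local clock, using the two timestamps logged above.
			guest := 1709866594.975038287
			remote := 1709866594.872011390
			fmt.Printf("delta ≈ %.3f s\n", guest-remote) // ≈ 0.103 s, the 103.026897ms delta reported
		}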
	I0308 02:56:34.986529  919714 start.go:83] releasing machines lock for "addons-963897", held for 28.196268593s
	I0308 02:56:34.986558  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:34.986831  919714 main.go:141] libmachine: (addons-963897) Calling .GetIP
	I0308 02:56:34.989433  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.989837  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.989870  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.990020  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:34.990525  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:34.990692  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:56:34.990782  919714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 02:56:34.990835  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.990961  919714 ssh_runner.go:195] Run: cat /version.json
	I0308 02:56:34.990985  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:56:34.993591  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.993646  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.993979  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.994006  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.994032  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:34.994046  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:34.994219  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.994230  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:56:34.994449  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.994448  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:56:34.994631  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.994665  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:56:34.994800  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:56:34.994796  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:56:35.095084  919714 ssh_runner.go:195] Run: systemctl --version
	I0308 02:56:35.101287  919714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 02:56:35.266134  919714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 02:56:35.273838  919714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 02:56:35.273928  919714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 02:56:35.292074  919714 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 02:56:35.292100  919714 start.go:494] detecting cgroup driver to use...
	I0308 02:56:35.292185  919714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 02:56:35.309099  919714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 02:56:35.323651  919714 docker.go:217] disabling cri-docker service (if available) ...
	I0308 02:56:35.323733  919714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 02:56:35.337905  919714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 02:56:35.352279  919714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 02:56:35.469689  919714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 02:56:35.635586  919714 docker.go:233] disabling docker service ...
	I0308 02:56:35.635673  919714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 02:56:35.652520  919714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 02:56:35.668262  919714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 02:56:35.807341  919714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 02:56:35.943469  919714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 02:56:35.959043  919714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 02:56:35.979854  919714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 02:56:35.979934  919714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:56:35.991571  919714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 02:56:35.991650  919714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:56:36.003163  919714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 02:56:36.014453  919714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
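	Assuming the stock /etc/crio/crio.conf.d/02-crio.conf already carries pause_image and cgroup_manager entries, the net effect of the sed edits above is a drop-in whose relevant lines read roughly:

		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"

	i.e. CRI-O is pinned to the Kubernetes pause image and switched to the cgroupfs cgroup driver, matching the cgroupDriver: cgroupfs setting in the kubelet configuration generated further below.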
	I0308 02:56:36.025861  919714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 02:56:36.037373  919714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 02:56:36.047423  919714 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 02:56:36.047481  919714 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 02:56:36.062205  919714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 02:56:36.072505  919714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:56:36.213847  919714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 02:56:36.669833  919714 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 02:56:36.669948  919714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 02:56:36.675843  919714 start.go:562] Will wait 60s for crictl version
	I0308 02:56:36.675940  919714 ssh_runner.go:195] Run: which crictl
	I0308 02:56:36.680369  919714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 02:56:36.720141  919714 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 02:56:36.720247  919714 ssh_runner.go:195] Run: crio --version
	I0308 02:56:36.753286  919714 ssh_runner.go:195] Run: crio --version
	I0308 02:56:36.831919  919714 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 02:56:36.833474  919714 main.go:141] libmachine: (addons-963897) Calling .GetIP
	I0308 02:56:36.836310  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:36.836689  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:56:36.836721  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:56:36.836922  919714 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 02:56:36.841907  919714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:56:36.855261  919714 kubeadm.go:877] updating cluster {Name:addons-963897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 02:56:36.855401  919714 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 02:56:36.855465  919714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:56:36.891871  919714 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 02:56:36.891939  919714 ssh_runner.go:195] Run: which lz4
	I0308 02:56:36.896623  919714 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 02:56:36.901421  919714 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 02:56:36.901448  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 02:56:38.624204  919714 crio.go:444] duration metric: took 1.727625308s to copy over tarball
	I0308 02:56:38.624283  919714 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 02:56:41.730701  919714 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.10637778s)
	I0308 02:56:41.759039  919714 crio.go:451] duration metric: took 3.134790424s to extract the tarball
	I0308 02:56:41.759062  919714 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 02:56:41.802789  919714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 02:56:41.853553  919714 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 02:56:41.853589  919714 cache_images.go:84] Images are preloaded, skipping loading
	I0308 02:56:41.853601  919714 kubeadm.go:928] updating node { 192.168.39.212 8443 v1.28.4 crio true true} ...
	I0308 02:56:41.853775  919714 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-963897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-963897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 02:56:41.853901  919714 ssh_runner.go:195] Run: crio config
	I0308 02:56:41.902514  919714 cni.go:84] Creating CNI manager for ""
	I0308 02:56:41.902543  919714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 02:56:41.902565  919714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 02:56:41.902595  919714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-963897 NodeName:addons-963897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 02:56:41.902850  919714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-963897"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 02:56:41.902933  919714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 02:56:41.914745  919714 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 02:56:41.914812  919714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 02:56:41.925996  919714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0308 02:56:41.944466  919714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 02:56:41.962330  919714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0308 02:56:41.980171  919714 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I0308 02:56:41.984474  919714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 02:56:41.997823  919714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:56:42.121061  919714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:56:42.139313  919714 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897 for IP: 192.168.39.212
	I0308 02:56:42.139335  919714 certs.go:194] generating shared ca certs ...
	I0308 02:56:42.139354  919714 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.139502  919714 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 02:56:42.269336  919714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt ...
	I0308 02:56:42.269368  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt: {Name:mk63e7b9743d15c5f188579422c0c3293ecfe560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.269565  919714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key ...
	I0308 02:56:42.269580  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key: {Name:mk1d95277af6254627842063eb6b8af8df092c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.269683  919714 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 02:56:42.523959  919714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt ...
	I0308 02:56:42.523993  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt: {Name:mk6b81255792035948693c66639db73c9a6b1ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.524148  919714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key ...
	I0308 02:56:42.524160  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key: {Name:mk332af5d3325b097ee0bd0e540d7c1f38cfdad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.524231  919714 certs.go:256] generating profile certs ...
	I0308 02:56:42.524288  919714 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.key
	I0308 02:56:42.524301  919714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt with IP's: []
	I0308 02:56:42.699774  919714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt ...
	I0308 02:56:42.699810  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: {Name:mkde53591fb29cd836db65c969ee2abcc3216658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.699960  919714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.key ...
	I0308 02:56:42.699971  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.key: {Name:mke71c6995f10fd8a9ccdd7aed36560d7696b5bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.700044  919714 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key.82d10552
	I0308 02:56:42.700063  919714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt.82d10552 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]
	I0308 02:56:42.926167  919714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt.82d10552 ...
	I0308 02:56:42.926201  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt.82d10552: {Name:mkbcd858789aee333b3f591e0bddda6d6dc00a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.926383  919714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key.82d10552 ...
	I0308 02:56:42.926404  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key.82d10552: {Name:mkded3e733019efb973eea6d1ecdf9435bf41ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:42.926505  919714 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt.82d10552 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt
	I0308 02:56:42.926618  919714 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key.82d10552 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key
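	The SAN list used for the apiserver certificate above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.212]) covers both access paths to the apiserver: 192.168.39.212 is the VM's node IP, while 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR configured for this cluster, which by Kubernetes convention becomes the ClusterIP of the default "kubernetes" Service that in-cluster clients use.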
	I0308 02:56:42.926671  919714 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.key
	I0308 02:56:42.926690  919714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.crt with IP's: []
	I0308 02:56:43.179398  919714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.crt ...
	I0308 02:56:43.179436  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.crt: {Name:mk2d40796cffc4572fdb872f4761bff1d4d8fa64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:43.179628  919714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.key ...
	I0308 02:56:43.179647  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.key: {Name:mk4bb621a23d4f76aca9726a3e529b02d555f857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:56:43.179877  919714 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 02:56:43.179921  919714 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 02:56:43.179952  919714 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 02:56:43.179975  919714 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 02:56:43.180638  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 02:56:43.210254  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 02:56:43.240799  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 02:56:43.270960  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 02:56:43.300981  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 02:56:43.330900  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 02:56:43.360597  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 02:56:43.389551  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 02:56:43.417381  919714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 02:56:43.445586  919714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 02:56:43.464134  919714 ssh_runner.go:195] Run: openssl version
	I0308 02:56:43.470449  919714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 02:56:43.481792  919714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:56:43.486853  919714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:56:43.486915  919714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 02:56:43.493154  919714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
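	The b5213941.0 link name follows OpenSSL's hashed CA-directory convention: b5213941 is the subject hash printed by the "openssl x509 -hash -noout" call just above for minikubeCA.pem, and the .0 suffix distinguishes certificates that happen to share a hash, so TLS clients that rely on the system trust store in /etc/ssl/certs can find the minikube CA.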
	I0308 02:56:43.504725  919714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 02:56:43.509418  919714 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 02:56:43.509478  919714 kubeadm.go:391] StartCluster: {Name:addons-963897 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:56:43.509578  919714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 02:56:43.509626  919714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 02:56:43.553878  919714 cri.go:89] found id: ""
	I0308 02:56:43.553956  919714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 02:56:43.564781  919714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 02:56:43.574726  919714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 02:56:43.584572  919714 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 02:56:43.584590  919714 kubeadm.go:156] found existing configuration files:
	
	I0308 02:56:43.584626  919714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 02:56:43.593916  919714 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 02:56:43.593972  919714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 02:56:43.604988  919714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 02:56:43.614401  919714 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 02:56:43.614454  919714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 02:56:43.624012  919714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 02:56:43.633241  919714 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 02:56:43.633299  919714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 02:56:43.642663  919714 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 02:56:43.651815  919714 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 02:56:43.651868  919714 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 02:56:43.661531  919714 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 02:56:43.875487  919714 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 02:56:54.078848  919714 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 02:56:54.078929  919714 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 02:56:54.078996  919714 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 02:56:54.079143  919714 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 02:56:54.079293  919714 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 02:56:54.079397  919714 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 02:56:54.080840  919714 out.go:204]   - Generating certificates and keys ...
	I0308 02:56:54.080917  919714 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 02:56:54.080989  919714 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 02:56:54.081080  919714 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 02:56:54.081152  919714 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 02:56:54.081236  919714 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 02:56:54.081333  919714 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 02:56:54.081421  919714 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 02:56:54.081569  919714 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-963897 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0308 02:56:54.081654  919714 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 02:56:54.081803  919714 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-963897 localhost] and IPs [192.168.39.212 127.0.0.1 ::1]
	I0308 02:56:54.081913  919714 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 02:56:54.082007  919714 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 02:56:54.082071  919714 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 02:56:54.082143  919714 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 02:56:54.082228  919714 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 02:56:54.082311  919714 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 02:56:54.082391  919714 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 02:56:54.082476  919714 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 02:56:54.082582  919714 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 02:56:54.082689  919714 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 02:56:54.084260  919714 out.go:204]   - Booting up control plane ...
	I0308 02:56:54.084393  919714 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 02:56:54.084495  919714 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 02:56:54.084580  919714 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 02:56:54.084718  919714 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 02:56:54.084833  919714 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 02:56:54.084896  919714 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 02:56:54.085089  919714 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 02:56:54.085179  919714 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.009521 seconds
	I0308 02:56:54.085322  919714 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 02:56:54.085500  919714 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 02:56:54.085583  919714 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 02:56:54.085779  919714 kubeadm.go:309] [mark-control-plane] Marking the node addons-963897 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 02:56:54.085850  919714 kubeadm.go:309] [bootstrap-token] Using token: emdbku.4lk9xhvzvmknz6ak
	I0308 02:56:54.087088  919714 out.go:204]   - Configuring RBAC rules ...
	I0308 02:56:54.087188  919714 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 02:56:54.087275  919714 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 02:56:54.087475  919714 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 02:56:54.087676  919714 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 02:56:54.087838  919714 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 02:56:54.087948  919714 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 02:56:54.088103  919714 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 02:56:54.088154  919714 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 02:56:54.088229  919714 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 02:56:54.088239  919714 kubeadm.go:309] 
	I0308 02:56:54.088318  919714 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 02:56:54.088331  919714 kubeadm.go:309] 
	I0308 02:56:54.088438  919714 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 02:56:54.088447  919714 kubeadm.go:309] 
	I0308 02:56:54.088481  919714 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 02:56:54.088573  919714 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 02:56:54.088645  919714 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 02:56:54.088670  919714 kubeadm.go:309] 
	I0308 02:56:54.088744  919714 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 02:56:54.088754  919714 kubeadm.go:309] 
	I0308 02:56:54.088813  919714 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 02:56:54.088820  919714 kubeadm.go:309] 
	I0308 02:56:54.088862  919714 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 02:56:54.088924  919714 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 02:56:54.088995  919714 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 02:56:54.089013  919714 kubeadm.go:309] 
	I0308 02:56:54.089140  919714 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 02:56:54.089254  919714 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 02:56:54.089283  919714 kubeadm.go:309] 
	I0308 02:56:54.089393  919714 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token emdbku.4lk9xhvzvmknz6ak \
	I0308 02:56:54.089522  919714 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 02:56:54.089570  919714 kubeadm.go:309] 	--control-plane 
	I0308 02:56:54.089584  919714 kubeadm.go:309] 
	I0308 02:56:54.089719  919714 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 02:56:54.089735  919714 kubeadm.go:309] 
	I0308 02:56:54.089845  919714 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token emdbku.4lk9xhvzvmknz6ak \
	I0308 02:56:54.090004  919714 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
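For reference, the sha256 value passed to --discovery-token-ca-cert-hash above is the digest of the cluster CA's public key. A minimal sketch of how it can be recomputed on the control-plane node, assuming the CA certificate lives under this run's certificateDir (/var/lib/minikube/certs; a stock kubeadm install would use /etc/kubernetes/pki instead):

    # recompute the CA public-key hash that "kubeadm join" verifies against
    # (path is an assumption based on the certificateDir shown earlier in this log)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The output of this pipeline should match the 93ce3363... value printed in the join commands above.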
	I0308 02:56:54.090021  919714 cni.go:84] Creating CNI manager for ""
	I0308 02:56:54.090029  919714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 02:56:54.091481  919714 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 02:56:54.093031  919714 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 02:56:54.108565  919714 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
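The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube templates for the crio runtime; its contents are not shown in the log. A representative sketch of such a bridge conflist follows; the plugin options and the 10.244.0.0/16 pod subnet here are generic bridge/portmap defaults, not values read from this cluster:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }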
	I0308 02:56:54.190668  919714 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 02:56:54.190853  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-963897 minikube.k8s.io/updated_at=2024_03_08T02_56_54_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=addons-963897 minikube.k8s.io/primary=true
	I0308 02:56:54.190852  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
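The clusterrolebinding created by the command above binds the cluster-admin ClusterRole to the default service account in kube-system; it is equivalent to applying a manifest like the following (names taken from the command line itself, the YAML layout is a sketch):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: minikube-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: kube-system

The repeated "kubectl get sa default" calls that follow are minikube waiting for the default service account to be created in the new cluster.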
	I0308 02:56:54.263324  919714 ops.go:34] apiserver oom_adj: -16
	I0308 02:56:54.382320  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:54.882940  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:55.383279  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:55.882498  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:56.383285  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:56.882997  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:57.382454  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:57.882746  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:58.383258  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:58.883102  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:59.383104  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:56:59.883358  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:00.383236  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:00.882777  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:01.382448  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:01.882629  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:02.382458  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:02.883260  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:03.383167  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:03.883060  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:04.383379  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:04.882780  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:05.382514  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:05.883070  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:06.382403  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:06.882655  919714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 02:57:07.018706  919714 kubeadm.go:1106] duration metric: took 12.827939329s to wait for elevateKubeSystemPrivileges
	W0308 02:57:07.018753  919714 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 02:57:07.018765  919714 kubeadm.go:393] duration metric: took 23.509294067s to StartCluster
	I0308 02:57:07.018799  919714 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:57:07.018934  919714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 02:57:07.019369  919714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 02:57:07.019582  919714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 02:57:07.019618  919714 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 02:57:07.021355  919714 out.go:177] * Verifying Kubernetes components...
	I0308 02:57:07.019705  919714 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0308 02:57:07.019876  919714 config.go:182] Loaded profile config "addons-963897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:57:07.023035  919714 addons.go:69] Setting cloud-spanner=true in profile "addons-963897"
	I0308 02:57:07.023058  919714 addons.go:69] Setting yakd=true in profile "addons-963897"
	I0308 02:57:07.023069  919714 addons.go:69] Setting inspektor-gadget=true in profile "addons-963897"
	I0308 02:57:07.023109  919714 addons.go:234] Setting addon cloud-spanner=true in "addons-963897"
	I0308 02:57:07.023119  919714 addons.go:234] Setting addon inspektor-gadget=true in "addons-963897"
	I0308 02:57:07.023123  919714 addons.go:69] Setting storage-provisioner=true in profile "addons-963897"
	I0308 02:57:07.023136  919714 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-963897"
	I0308 02:57:07.023143  919714 addons.go:234] Setting addon storage-provisioner=true in "addons-963897"
	I0308 02:57:07.023154  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023155  919714 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-963897"
	I0308 02:57:07.023162  919714 addons.go:69] Setting gcp-auth=true in profile "addons-963897"
	I0308 02:57:07.023158  919714 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-963897"
	I0308 02:57:07.023173  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023176  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023180  919714 mustload.go:65] Loading cluster: addons-963897
	I0308 02:57:07.023211  919714 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-963897"
	I0308 02:57:07.023124  919714 addons.go:69] Setting metrics-server=true in profile "addons-963897"
	I0308 02:57:07.023290  919714 addons.go:69] Setting helm-tiller=true in profile "addons-963897"
	I0308 02:57:07.023315  919714 addons.go:234] Setting addon metrics-server=true in "addons-963897"
	I0308 02:57:07.023322  919714 addons.go:69] Setting ingress=true in profile "addons-963897"
	I0308 02:57:07.023338  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023348  919714 addons.go:69] Setting ingress-dns=true in profile "addons-963897"
	I0308 02:57:07.023359  919714 config.go:182] Loaded profile config "addons-963897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 02:57:07.023369  919714 addons.go:234] Setting addon ingress-dns=true in "addons-963897"
	I0308 02:57:07.023436  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023649  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.023692  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023701  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.023721  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.023736  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023745  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023785  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.023114  919714 addons.go:69] Setting registry=true in profile "addons-963897"
	I0308 02:57:07.023815  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023844  919714 addons.go:234] Setting addon registry=true in "addons-963897"
	I0308 02:57:07.023882  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023317  919714 addons.go:234] Setting addon helm-tiller=true in "addons-963897"
	I0308 02:57:07.023910  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.023929  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023938  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.024222  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.024246  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.024260  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.024269  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.023154  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023114  919714 addons.go:234] Setting addon yakd=true in "addons-963897"
	I0308 02:57:07.024523  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.024541  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.024590  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.024733  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.024758  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.024868  919714 addons.go:69] Setting default-storageclass=true in profile "addons-963897"
	I0308 02:57:07.024897  919714 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-963897"
	I0308 02:57:07.024969  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.025018  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.025175  919714 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-963897"
	I0308 02:57:07.025269  919714 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-963897"
	I0308 02:57:07.025338  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023047  919714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 02:57:07.028321  919714 addons.go:69] Setting volumesnapshots=true in profile "addons-963897"
	I0308 02:57:07.023339  919714 addons.go:234] Setting addon ingress=true in "addons-963897"
	I0308 02:57:07.028355  919714 addons.go:234] Setting addon volumesnapshots=true in "addons-963897"
	I0308 02:57:07.028384  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.023647  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.028456  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.028385  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.046391  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I0308 02:57:07.046460  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I0308 02:57:07.047025  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.047111  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.047915  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.047940  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.048382  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.048813  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.048833  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.048990  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.049023  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.049315  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.049897  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.049948  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.052497  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I0308 02:57:07.052866  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.052954  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0308 02:57:07.053238  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.053418  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.053443  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.053808  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.054010  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.054031  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.054490  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.054535  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.055261  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.055520  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.056649  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36033
	I0308 02:57:07.057067  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.057562  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.057582  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.057614  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.057711  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.057727  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.057766  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.057821  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.057837  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.058116  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.058124  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.058155  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.058450  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.058469  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.058903  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.059472  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.059497  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.061929  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0308 02:57:07.061975  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0308 02:57:07.062394  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.062989  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.063010  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.063422  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.064027  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.064054  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.064261  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0308 02:57:07.064734  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.065306  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.065335  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.066031  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.066603  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.066649  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.069684  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.070264  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.070282  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.070808  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.071319  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.071351  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.071934  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33453
	I0308 02:57:07.072322  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.072890  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.072909  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.073324  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.073552  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.078020  919714 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-963897"
	I0308 02:57:07.078071  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.078430  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.078464  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.096297  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0308 02:57:07.096870  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.097566  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.097593  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.098148  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.098334  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0308 02:57:07.098532  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.100353  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0308 02:57:07.100892  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.101697  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.101720  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.101787  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0308 02:57:07.102008  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0308 02:57:07.102190  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.102200  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40421
	I0308 02:57:07.102668  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.102741  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.102765  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.102798  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.103126  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.103201  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.103247  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.103273  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.103452  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.103702  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.104307  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.104333  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.104462  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.104480  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.104993  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.105060  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.105122  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0308 02:57:07.105287  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.105500  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.105571  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.108861  919714 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0308 02:57:07.106013  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.106349  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.106436  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.107048  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.109141  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.110631  919714 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0308 02:57:07.110646  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0308 02:57:07.110665  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.110720  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.112632  919714 out.go:177]   - Using image docker.io/registry:2.8.3
	I0308 02:57:07.111615  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.111651  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.112119  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0308 02:57:07.113522  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.115834  919714 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0308 02:57:07.114178  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.114816  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.114897  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.115092  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.115569  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.115732  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.117118  919714 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0308 02:57:07.118225  919714 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0308 02:57:07.119911  919714 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:57:07.119924  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0308 02:57:07.118235  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0308 02:57:07.119968  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.118267  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.120034  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.118502  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.121450  919714 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0308 02:57:07.118789  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.119747  919714 addons.go:234] Setting addon default-storageclass=true in "addons-963897"
	I0308 02:57:07.119939  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.120221  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.121559  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.123560  919714 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0308 02:57:07.123576  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0308 02:57:07.123593  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.121607  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:07.122495  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.124021  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.124067  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.123024  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0308 02:57:07.123039  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0308 02:57:07.123409  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.124113  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.124960  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.125261  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.125298  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.125399  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.125485  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I0308 02:57:07.126101  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42261
	I0308 02:57:07.126422  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.126438  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.126531  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.126820  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.126887  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.126910  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.127312  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.127343  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.127359  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0308 02:57:07.127512  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.127616  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.127666  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.127706  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.127728  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.127746  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.127766  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.127812  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.127879  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.127913  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.127941  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.128341  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.128384  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.128643  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.128712  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.128889  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.128906  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.129018  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.129033  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.129344  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.129363  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.129375  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.129563  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.129684  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.129738  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
	I0308 02:57:07.129865  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.130340  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.130483  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.130503  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.130604  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.131121  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.131139  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.131316  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.131451  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.131463  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.131967  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.131991  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.132341  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.132409  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.132576  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.134453  919714 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 02:57:07.134484  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0308 02:57:07.137323  919714 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0308 02:57:07.133076  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.133559  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.137419  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.132839  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.135907  919714 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:57:07.137495  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 02:57:07.137519  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.136250  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.139178  919714 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 02:57:07.139199  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 02:57:07.138116  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.139219  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.139252  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.138630  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.139274  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.139460  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.139765  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.141835  919714 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0308 02:57:07.140673  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.142267  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.142991  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.143013  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.142919  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.143054  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.143191  919714 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0308 02:57:07.143203  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0308 02:57:07.143217  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.143314  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.143974  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.144240  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.144518  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.144544  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.144563  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.144684  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.144903  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.145052  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.145183  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.147063  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.147408  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.147429  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.147588  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.147765  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.147920  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.148074  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.148655  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I0308 02:57:07.149245  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.149918  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.149941  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.150442  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.151105  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:07.151150  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:07.156470  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0308 02:57:07.157040  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.157518  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.157535  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.157945  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I0308 02:57:07.158095  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.158358  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.158375  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.158877  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.158899  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.159284  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.159598  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.160415  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.161368  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.163001  919714 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:57:07.164497  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0308 02:57:07.166025  919714 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:57:07.165952  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0308 02:57:07.168644  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0308 02:57:07.167367  919714 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0308 02:57:07.168381  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36341
	I0308 02:57:07.169860  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0308 02:57:07.171318  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0308 02:57:07.172619  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0308 02:57:07.171364  919714 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:57:07.170485  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.170471  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.172658  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0308 02:57:07.172686  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.173234  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.174091  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.174147  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0308 02:57:07.173381  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.174520  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.175964  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0308 02:57:07.176019  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.176168  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.176519  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45713
	I0308 02:57:07.177212  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0308 02:57:07.177592  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0308 02:57:07.178828  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0308 02:57:07.178849  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0308 02:57:07.177774  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.178867  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.178896  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.178165  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.178928  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.178223  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.178340  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.178455  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.179142  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.179206  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.179349  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.179672  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.180065  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.180091  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.180515  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.180609  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.180637  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.181262  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:07.181307  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.181357  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.181510  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.181573  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.183762  919714 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0308 02:57:07.185314  919714 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0308 02:57:07.183008  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.183032  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.183360  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.183943  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.186920  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.186978  919714 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:57:07.189332  919714 out.go:177]   - Using image docker.io/busybox:stable
	I0308 02:57:07.188184  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.188213  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0308 02:57:07.188246  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0308 02:57:07.188342  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.192130  919714 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0308 02:57:07.190740  919714 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0308 02:57:07.190761  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.190772  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0308 02:57:07.190938  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.194837  919714 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0308 02:57:07.194890  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0308 02:57:07.194917  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.193449  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.193519  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.193553  919714 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:57:07.195281  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0308 02:57:07.195314  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.197935  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.198491  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.198519  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.198804  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.199120  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.199290  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.199509  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.199650  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.199891  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.199927  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.199949  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.200256  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.200352  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.200372  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.200458  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.200629  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.200637  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.200777  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.200853  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.200909  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.201019  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.201165  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.201602  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.201621  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.201830  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.201969  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.202103  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0308 02:57:07.202104  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.202310  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:07.202563  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:07.203026  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:07.203048  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:07.203502  919714 main.go:141] libmachine: () Calling .GetMachineName
	W0308 02:57:07.203571  919714 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49220->192.168.39.212:22: read: connection reset by peer
	I0308 02:57:07.203618  919714 retry.go:31] will retry after 190.22627ms: ssh: handshake failed: read tcp 192.168.39.1:49220->192.168.39.212:22: read: connection reset by peer
	I0308 02:57:07.203714  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:07.205142  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:07.205414  919714 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 02:57:07.205430  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 02:57:07.205442  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:07.207765  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.208014  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:07.208042  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:07.208148  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:07.208332  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:07.208480  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:07.208623  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	W0308 02:57:07.209287  919714 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49236->192.168.39.212:22: read: connection reset by peer
	I0308 02:57:07.209317  919714 retry.go:31] will retry after 227.532847ms: ssh: handshake failed: read tcp 192.168.39.1:49236->192.168.39.212:22: read: connection reset by peer
	I0308 02:57:07.429469  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0308 02:57:07.454027  919714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 02:57:07.454189  919714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 02:57:07.491918  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0308 02:57:07.496498  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0308 02:57:07.496521  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0308 02:57:07.543196  919714 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0308 02:57:07.543220  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0308 02:57:07.590324  919714 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0308 02:57:07.590349  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0308 02:57:07.621960  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0308 02:57:07.626919  919714 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 02:57:07.626938  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0308 02:57:07.628123  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 02:57:07.655556  919714 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0308 02:57:07.655582  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0308 02:57:07.658490  919714 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0308 02:57:07.658514  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0308 02:57:07.666386  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0308 02:57:07.666408  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0308 02:57:07.689801  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0308 02:57:07.721926  919714 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0308 02:57:07.721955  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0308 02:57:07.731428  919714 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:57:07.731451  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0308 02:57:07.776865  919714 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0308 02:57:07.776892  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0308 02:57:07.829825  919714 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 02:57:07.829854  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 02:57:07.848902  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0308 02:57:07.851433  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0308 02:57:07.851465  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0308 02:57:07.860677  919714 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0308 02:57:07.860706  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0308 02:57:07.862408  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 02:57:07.871894  919714 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:57:07.871915  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0308 02:57:07.888656  919714 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0308 02:57:07.888676  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0308 02:57:07.960542  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0308 02:57:07.965268  919714 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0308 02:57:07.965308  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0308 02:57:07.993408  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0308 02:57:08.047724  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0308 02:57:08.047753  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0308 02:57:08.057568  919714 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:57:08.057593  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 02:57:08.070521  919714 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0308 02:57:08.070546  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0308 02:57:08.141914  919714 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0308 02:57:08.141944  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0308 02:57:08.183465  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0308 02:57:08.183493  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0308 02:57:08.279266  919714 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:57:08.279299  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0308 02:57:08.349800  919714 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0308 02:57:08.349838  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0308 02:57:08.362717  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 02:57:08.459527  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0308 02:57:08.459554  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0308 02:57:08.535736  919714 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0308 02:57:08.535768  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0308 02:57:08.574833  919714 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:57:08.574861  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0308 02:57:08.622080  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0308 02:57:08.710572  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0308 02:57:08.710600  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0308 02:57:08.771900  919714 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0308 02:57:08.771941  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0308 02:57:08.871233  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:57:08.903223  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0308 02:57:08.903257  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0308 02:57:09.030833  919714 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0308 02:57:09.030869  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0308 02:57:09.227795  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0308 02:57:09.227821  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0308 02:57:09.404436  919714 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:57:09.404460  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0308 02:57:09.782149  919714 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:57:09.782182  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0308 02:57:09.900669  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0308 02:57:10.216437  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0308 02:57:12.549734  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.120215422s)
	I0308 02:57:12.549840  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:12.549849  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:12.549869  919714 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.095759418s)
	I0308 02:57:12.549780  919714 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.095554454s)
	I0308 02:57:12.549916  919714 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0308 02:57:12.550305  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:12.550369  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:12.550384  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:12.550393  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:12.550712  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:12.550765  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:12.550786  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:12.551027  919714 node_ready.go:35] waiting up to 6m0s for node "addons-963897" to be "Ready" ...
	I0308 02:57:12.581646  919714 node_ready.go:49] node "addons-963897" has status "Ready":"True"
	I0308 02:57:12.581680  919714 node_ready.go:38] duration metric: took 30.627203ms for node "addons-963897" to be "Ready" ...
	I0308 02:57:12.581691  919714 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:57:12.606568  919714 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhr8f" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.054981  919714 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-963897" context rescaled to 1 replicas
	I0308 02:57:13.689119  919714 pod_ready.go:92] pod "coredns-5dd5756b68-dhr8f" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:13.689161  919714 pod_ready.go:81] duration metric: took 1.082548297s for pod "coredns-5dd5756b68-dhr8f" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.689176  919714 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nkp5t" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.725657  919714 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0308 02:57:13.725705  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:13.728418  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:13.728829  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:13.728855  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:13.729060  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:13.729308  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:13.729491  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:13.729653  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:13.799055  919714 pod_ready.go:92] pod "coredns-5dd5756b68-nkp5t" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:13.799083  919714 pod_ready.go:81] duration metric: took 109.896356ms for pod "coredns-5dd5756b68-nkp5t" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.799103  919714 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.888167  919714 pod_ready.go:92] pod "etcd-addons-963897" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:13.888199  919714 pod_ready.go:81] duration metric: took 89.087059ms for pod "etcd-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.888216  919714 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.995639  919714 pod_ready.go:92] pod "kube-apiserver-addons-963897" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:13.995668  919714 pod_ready.go:81] duration metric: took 107.442324ms for pod "kube-apiserver-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:13.995683  919714 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.108671  919714 pod_ready.go:92] pod "kube-controller-manager-addons-963897" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:14.108700  919714 pod_ready.go:81] duration metric: took 113.010005ms for pod "kube-controller-manager-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.108711  919714 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42bsl" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.204639  919714 pod_ready.go:92] pod "kube-proxy-42bsl" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:14.204664  919714 pod_ready.go:81] duration metric: took 95.946687ms for pod "kube-proxy-42bsl" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.204675  919714 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.217675  919714 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0308 02:57:14.513551  919714 addons.go:234] Setting addon gcp-auth=true in "addons-963897"
	I0308 02:57:14.513624  919714 host.go:66] Checking if "addons-963897" exists ...
	I0308 02:57:14.513953  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:14.513986  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:14.529917  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0308 02:57:14.530530  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:14.531172  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:14.531201  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:14.531641  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:14.532257  919714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 02:57:14.532310  919714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 02:57:14.548529  919714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45439
	I0308 02:57:14.548964  919714 main.go:141] libmachine: () Calling .GetVersion
	I0308 02:57:14.549561  919714 main.go:141] libmachine: Using API Version  1
	I0308 02:57:14.549588  919714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 02:57:14.549988  919714 main.go:141] libmachine: () Calling .GetMachineName
	I0308 02:57:14.550206  919714 main.go:141] libmachine: (addons-963897) Calling .GetState
	I0308 02:57:14.551677  919714 main.go:141] libmachine: (addons-963897) Calling .DriverName
	I0308 02:57:14.551898  919714 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0308 02:57:14.551928  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHHostname
	I0308 02:57:14.555254  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:14.555704  919714 main.go:141] libmachine: (addons-963897) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9d:15", ip: ""} in network mk-addons-963897: {Iface:virbr1 ExpiryTime:2024-03-08 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4c:9d:15 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:addons-963897 Clientid:01:52:54:00:4c:9d:15}
	I0308 02:57:14.555737  919714 main.go:141] libmachine: (addons-963897) DBG | domain addons-963897 has defined IP address 192.168.39.212 and MAC address 52:54:00:4c:9d:15 in network mk-addons-963897
	I0308 02:57:14.555926  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHPort
	I0308 02:57:14.556143  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHKeyPath
	I0308 02:57:14.556334  919714 main.go:141] libmachine: (addons-963897) Calling .GetSSHUsername
	I0308 02:57:14.556503  919714 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/addons-963897/id_rsa Username:docker}
	I0308 02:57:14.671141  919714 pod_ready.go:92] pod "kube-scheduler-addons-963897" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:14.671182  919714 pod_ready.go:81] duration metric: took 466.497693ms for pod "kube-scheduler-addons-963897" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:14.671202  919714 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:16.687380  919714 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace has status "Ready":"False"
	I0308 02:57:17.228493  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.736535259s)
	I0308 02:57:17.228560  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228562  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.606562144s)
	I0308 02:57:17.228629  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228647  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228574  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228724  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.60056474s)
	I0308 02:57:17.228755  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.538930681s)
	I0308 02:57:17.228806  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228822  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228846  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.379908801s)
	I0308 02:57:17.228765  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228875  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228881  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228899  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228938  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.366507539s)
	I0308 02:57:17.228961  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.228969  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.228992  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229044  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.268468499s)
	I0308 02:57:17.229065  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229075  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229134  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.235700111s)
	I0308 02:57:17.229159  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229169  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229326  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.229378  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229401  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229410  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229431  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.229460  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.229488  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229495  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229504  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229512  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229546  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229567  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229576  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229584  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229585  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229594  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229602  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229609  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229654  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.229676  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229682  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229690  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229695  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.229735  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.229753  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.229759  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.229769  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.229775  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.230895  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.230939  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.230948  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.231257  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.231274  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.231279  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.868525886s)
	I0308 02:57:17.231309  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.231328  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.231411  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.609293549s)
	I0308 02:57:17.231435  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.231451  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.231623  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.36035435s)
	W0308 02:57:17.231666  919714 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0308 02:57:17.231691  919714 retry.go:31] will retry after 318.402753ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0308 02:57:17.231784  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.331073746s)
	I0308 02:57:17.231810  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.231823  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.231913  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.231946  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.231957  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.231965  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.231976  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.232030  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.232051  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.232058  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.232068  919714 addons.go:470] Verifying addon ingress=true in "addons-963897"
	I0308 02:57:17.233676  919714 out.go:177] * Verifying ingress addon...
	I0308 02:57:17.235228  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.235266  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.235298  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.235351  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.235366  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.235372  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.235375  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.235383  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.233430  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.235513  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.233458  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.233485  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.235705  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.237417  919714 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-963897 service yakd-dashboard -n yakd-dashboard
	
	I0308 02:57:17.235989  919714 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0308 02:57:17.233522  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.233528  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.233550  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.233553  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.233574  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.233596  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.236032  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.236049  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.233501  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.238089  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.238111  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.239335  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.239358  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.239362  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.239372  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.239404  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.239349  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.239426  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.239435  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.239442  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.239453  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.239461  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.240127  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.240169  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.240194  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.240142  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.240221  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.240247  919714 addons.go:470] Verifying addon registry=true in "addons-963897"
	I0308 02:57:17.240162  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.241690  919714 out.go:177] * Verifying registry addon...
	I0308 02:57:17.240162  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.240209  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.242882  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.242905  919714 addons.go:470] Verifying addon metrics-server=true in "addons-963897"
	I0308 02:57:17.243464  919714 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0308 02:57:17.252122  919714 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0308 02:57:17.252153  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:17.277941  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.277970  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.278299  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.278341  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	W0308 02:57:17.278443  919714 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0308 02:57:17.279443  919714 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0308 02:57:17.279462  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:17.292251  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:17.292272  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:17.292586  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:17.292626  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:17.292650  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:17.550817  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0308 02:57:17.747929  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:17.754796  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:18.274513  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:18.275259  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:18.745110  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:18.750363  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:19.184425  919714 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace has status "Ready":"False"
	I0308 02:57:19.250020  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:19.254548  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:19.711013  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.49451732s)
	I0308 02:57:19.711036  919714 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.159115363s)
	I0308 02:57:19.711098  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:19.711240  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:19.713055  919714 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0308 02:57:19.711635  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:19.711671  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:19.714148  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:19.714163  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:19.714173  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:19.715689  919714 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0308 02:57:19.714453  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:19.714478  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:19.715737  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:19.715751  919714 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-963897"
	I0308 02:57:19.717225  919714 out.go:177] * Verifying csi-hostpath-driver addon...
	I0308 02:57:19.718951  919714 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0308 02:57:19.718976  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0308 02:57:19.719587  919714 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0308 02:57:19.732469  919714 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0308 02:57:19.732494  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:19.761334  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:19.770788  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:19.940764  919714 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0308 02:57:19.940802  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0308 02:57:19.961407  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.41052868s)
	I0308 02:57:19.961471  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:19.961486  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:19.961805  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:19.961835  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:19.961831  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:19.961845  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:19.961854  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:19.962143  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:19.962164  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:20.025795  919714 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:57:20.025824  919714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0308 02:57:20.084107  919714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0308 02:57:20.227652  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:20.243880  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:20.247886  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:20.734385  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:20.743651  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:20.750652  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:21.225449  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:21.243659  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:21.250032  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:21.665115  919714 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.580933564s)
	I0308 02:57:21.665201  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:21.665221  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:21.665513  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:21.665566  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:21.665586  919714 main.go:141] libmachine: Making call to close driver server
	I0308 02:57:21.665598  919714 main.go:141] libmachine: (addons-963897) Calling .Close
	I0308 02:57:21.665935  919714 main.go:141] libmachine: Successfully made call to close driver server
	I0308 02:57:21.665956  919714 main.go:141] libmachine: (addons-963897) DBG | Closing plugin on server side
	I0308 02:57:21.665964  919714 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 02:57:21.668134  919714 addons.go:470] Verifying addon gcp-auth=true in "addons-963897"
	I0308 02:57:21.670094  919714 out.go:177] * Verifying gcp-auth addon...
	I0308 02:57:21.672730  919714 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0308 02:57:21.687939  919714 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0308 02:57:21.687959  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:21.706796  919714 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace has status "Ready":"False"
	I0308 02:57:21.736504  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:21.748862  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:21.754353  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:22.176985  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:22.226041  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:22.244335  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:22.249227  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:22.677606  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:22.726623  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:22.743783  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:22.747795  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:23.177123  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:23.225491  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:23.244011  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:23.248282  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:23.677564  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:23.725719  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:23.743715  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:23.748001  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:24.329300  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:24.335133  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:24.335482  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:24.340699  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:24.340902  919714 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace has status "Ready":"False"
	I0308 02:57:24.677126  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:24.725742  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:24.743431  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:24.750004  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:25.178856  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:25.226778  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:25.244800  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:25.248705  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:25.677929  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:25.726158  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:25.751167  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:25.757695  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:26.178739  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:26.226084  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:26.243968  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:26.248169  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:26.677303  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:26.679340  919714 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace has status "Ready":"True"
	I0308 02:57:26.679366  919714 pod_ready.go:81] duration metric: took 12.008154346s for pod "nvidia-device-plugin-daemonset-bcff4" in "kube-system" namespace to be "Ready" ...
	I0308 02:57:26.679380  919714 pod_ready.go:38] duration metric: took 14.097678424s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 02:57:26.679404  919714 api_server.go:52] waiting for apiserver process to appear ...
	I0308 02:57:26.679473  919714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 02:57:26.699022  919714 api_server.go:72] duration metric: took 19.679356499s to wait for apiserver process to appear ...
	I0308 02:57:26.699049  919714 api_server.go:88] waiting for apiserver healthz status ...
	I0308 02:57:26.699101  919714 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I0308 02:57:26.703341  919714 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I0308 02:57:26.704944  919714 api_server.go:141] control plane version: v1.28.4
	I0308 02:57:26.704974  919714 api_server.go:131] duration metric: took 5.91424ms to wait for apiserver health ...
	I0308 02:57:26.704986  919714 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 02:57:26.713927  919714 system_pods.go:59] 18 kube-system pods found
	I0308 02:57:26.713951  919714 system_pods.go:61] "coredns-5dd5756b68-nkp5t" [6d3c210e-fccf-4b3d-8c1c-e33ac749ad72] Running
	I0308 02:57:26.713958  919714 system_pods.go:61] "csi-hostpath-attacher-0" [e7b73a09-3a11-4e03-9baa-328efd81e7ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:57:26.713965  919714 system_pods.go:61] "csi-hostpath-resizer-0" [875f9ee5-2c21-4822-974d-63d5c9fdfde5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:57:26.713974  919714 system_pods.go:61] "csi-hostpathplugin-87cgp" [f813ac78-0e4f-4a63-9a2f-b2a384a116d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:57:26.713979  919714 system_pods.go:61] "etcd-addons-963897" [883130ec-7027-4dce-852a-2d52fd6bfe10] Running
	I0308 02:57:26.713984  919714 system_pods.go:61] "kube-apiserver-addons-963897" [66760eea-ff45-4b01-9eee-e86566485848] Running
	I0308 02:57:26.713989  919714 system_pods.go:61] "kube-controller-manager-addons-963897" [dcca1db8-04e9-425b-8a65-939c5afd4b07] Running
	I0308 02:57:26.713999  919714 system_pods.go:61] "kube-ingress-dns-minikube" [f3524dd5-1994-4650-80c6-d18fef44db57] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:57:26.714008  919714 system_pods.go:61] "kube-proxy-42bsl" [abbbc3f6-509a-44d1-9738-8e887f2d44af] Running
	I0308 02:57:26.714016  919714 system_pods.go:61] "kube-scheduler-addons-963897" [bdcc331d-9f63-4641-82d8-0a46de31cc53] Running
	I0308 02:57:26.714024  919714 system_pods.go:61] "metrics-server-69cf46c98-szqb7" [6456987a-f2c2-4dd8-9fd2-268027357dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 02:57:26.714035  919714 system_pods.go:61] "nvidia-device-plugin-daemonset-bcff4" [8fec37a2-1bb5-4f90-ada2-d022b6694cf3] Running
	I0308 02:57:26.714041  919714 system_pods.go:61] "registry-proxy-4snpq" [07f9d0bd-1ed8-4806-826e-1720b7cf2dbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:57:26.714047  919714 system_pods.go:61] "registry-rs9mh" [96e3cb85-f90b-45c0-b9d4-9c2c2da9ad88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0308 02:57:26.714054  919714 system_pods.go:61] "snapshot-controller-58dbcc7b99-mwn74" [7bbae1e6-778b-43ab-bdb5-aca88430b5c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0308 02:57:26.714062  919714 system_pods.go:61] "snapshot-controller-58dbcc7b99-ztckn" [4dc8ccf5-8fb7-44e9-a713-a955169da81f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0308 02:57:26.714066  919714 system_pods.go:61] "storage-provisioner" [58345b2b-7c94-4525-9265-4a479388495a] Running
	I0308 02:57:26.714071  919714 system_pods.go:61] "tiller-deploy-7b677967b9-gk6tb" [c562c869-b9e4-4778-b548-5329e8e7ff62] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:57:26.714079  919714 system_pods.go:74] duration metric: took 9.085831ms to wait for pod list to return data ...
	I0308 02:57:26.714090  919714 default_sa.go:34] waiting for default service account to be created ...
	I0308 02:57:26.716229  919714 default_sa.go:45] found service account: "default"
	I0308 02:57:26.716245  919714 default_sa.go:55] duration metric: took 2.148367ms for default service account to be created ...
	I0308 02:57:26.716252  919714 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 02:57:26.725501  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:26.725826  919714 system_pods.go:86] 18 kube-system pods found
	I0308 02:57:26.727565  919714 system_pods.go:89] "coredns-5dd5756b68-nkp5t" [6d3c210e-fccf-4b3d-8c1c-e33ac749ad72] Running
	I0308 02:57:26.727585  919714 system_pods.go:89] "csi-hostpath-attacher-0" [e7b73a09-3a11-4e03-9baa-328efd81e7ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0308 02:57:26.727593  919714 system_pods.go:89] "csi-hostpath-resizer-0" [875f9ee5-2c21-4822-974d-63d5c9fdfde5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0308 02:57:26.727600  919714 system_pods.go:89] "csi-hostpathplugin-87cgp" [f813ac78-0e4f-4a63-9a2f-b2a384a116d6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0308 02:57:26.727611  919714 system_pods.go:89] "etcd-addons-963897" [883130ec-7027-4dce-852a-2d52fd6bfe10] Running
	I0308 02:57:26.727617  919714 system_pods.go:89] "kube-apiserver-addons-963897" [66760eea-ff45-4b01-9eee-e86566485848] Running
	I0308 02:57:26.727622  919714 system_pods.go:89] "kube-controller-manager-addons-963897" [dcca1db8-04e9-425b-8a65-939c5afd4b07] Running
	I0308 02:57:26.727633  919714 system_pods.go:89] "kube-ingress-dns-minikube" [f3524dd5-1994-4650-80c6-d18fef44db57] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0308 02:57:26.727638  919714 system_pods.go:89] "kube-proxy-42bsl" [abbbc3f6-509a-44d1-9738-8e887f2d44af] Running
	I0308 02:57:26.727644  919714 system_pods.go:89] "kube-scheduler-addons-963897" [bdcc331d-9f63-4641-82d8-0a46de31cc53] Running
	I0308 02:57:26.727650  919714 system_pods.go:89] "metrics-server-69cf46c98-szqb7" [6456987a-f2c2-4dd8-9fd2-268027357dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 02:57:26.727656  919714 system_pods.go:89] "nvidia-device-plugin-daemonset-bcff4" [8fec37a2-1bb5-4f90-ada2-d022b6694cf3] Running
	I0308 02:57:26.727662  919714 system_pods.go:89] "registry-proxy-4snpq" [07f9d0bd-1ed8-4806-826e-1720b7cf2dbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0308 02:57:26.727675  919714 system_pods.go:89] "registry-rs9mh" [96e3cb85-f90b-45c0-b9d4-9c2c2da9ad88] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0308 02:57:26.727682  919714 system_pods.go:89] "snapshot-controller-58dbcc7b99-mwn74" [7bbae1e6-778b-43ab-bdb5-aca88430b5c4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0308 02:57:26.727689  919714 system_pods.go:89] "snapshot-controller-58dbcc7b99-ztckn" [4dc8ccf5-8fb7-44e9-a713-a955169da81f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0308 02:57:26.727720  919714 system_pods.go:89] "storage-provisioner" [58345b2b-7c94-4525-9265-4a479388495a] Running
	I0308 02:57:26.727727  919714 system_pods.go:89] "tiller-deploy-7b677967b9-gk6tb" [c562c869-b9e4-4778-b548-5329e8e7ff62] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0308 02:57:26.727733  919714 system_pods.go:126] duration metric: took 11.47638ms to wait for k8s-apps to be running ...
	I0308 02:57:26.727740  919714 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 02:57:26.727791  919714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 02:57:26.745907  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:26.746874  919714 system_svc.go:56] duration metric: took 19.125704ms WaitForService to wait for kubelet
	I0308 02:57:26.746900  919714 kubeadm.go:576] duration metric: took 19.727238174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 02:57:26.746923  919714 node_conditions.go:102] verifying NodePressure condition ...
	I0308 02:57:26.753978  919714 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 02:57:26.754016  919714 node_conditions.go:123] node cpu capacity is 2
	I0308 02:57:26.754036  919714 node_conditions.go:105] duration metric: took 7.101731ms to run NodePressure ...
	I0308 02:57:26.754052  919714 start.go:240] waiting for startup goroutines ...
	I0308 02:57:26.755942  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:27.176958  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:27.226283  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:27.244223  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:27.247771  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:27.677462  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:27.725600  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:27.743837  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:27.748138  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:28.177207  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:28.226414  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:28.244641  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:28.249127  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:28.676937  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:28.726689  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:28.745019  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:28.748658  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:29.177242  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:29.225617  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:29.245832  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:29.251942  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:29.677468  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:29.726187  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:29.747306  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:29.749434  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:30.177174  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:30.225766  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:30.244990  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:30.249212  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:30.680838  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:30.726346  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:30.750205  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:30.750500  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:31.178574  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:31.226095  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:31.243921  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:31.247648  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:31.678055  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:31.725469  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:31.745445  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:31.748361  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:32.177825  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:32.225590  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:32.244697  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:32.253760  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:32.676695  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:32.725974  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:32.745001  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:32.748131  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:33.177191  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:33.225848  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:33.246788  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:33.256054  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:33.678013  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:33.726111  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:33.744929  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:33.749089  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:34.177357  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:34.225679  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:34.250646  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:34.253084  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:34.874354  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:34.874934  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:34.876172  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:34.877013  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:35.177604  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:35.225704  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:35.243980  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:35.254808  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:35.676423  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:35.726195  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:35.744090  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:35.748743  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:36.176709  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:36.226333  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:36.243417  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:36.248017  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:36.676785  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:36.726508  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:36.743959  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:36.749129  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:37.176818  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:37.229217  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:37.243333  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:37.247782  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:37.676766  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:37.726467  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:37.744316  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:37.751690  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:38.176488  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:38.225620  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:38.245202  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:38.247955  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:38.681610  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:38.725786  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:38.750402  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:38.752285  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:39.176676  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:39.228389  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:39.255870  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:39.262741  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:39.677585  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:39.727055  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:39.747820  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:39.759538  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:40.177284  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:40.225486  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:40.243649  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:40.247944  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:40.677164  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:40.725819  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:40.745820  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:40.748454  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:41.178025  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:41.225832  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:41.244364  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:41.248471  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:41.677693  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:41.728033  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:41.744476  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:41.748906  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:42.357347  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:42.367588  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:42.368191  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:42.368309  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:42.677700  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:42.726540  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:42.744001  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:42.747723  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:43.177617  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:43.226672  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:43.244291  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:43.247596  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:43.678602  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:43.729928  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:43.744762  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:43.749730  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:44.176933  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:44.230220  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:44.257237  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0308 02:57:44.257868  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:44.684468  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:44.725679  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:44.743822  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:44.748214  919714 kapi.go:107] duration metric: took 27.50474791s to wait for kubernetes.io/minikube-addons=registry ...
	I0308 02:57:45.177822  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:45.227005  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:45.244774  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:45.679519  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:45.725716  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:45.743734  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:46.177197  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:46.226056  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:46.244036  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:46.677689  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:46.734173  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:46.752350  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:47.177515  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:47.228808  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:47.243719  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:47.677971  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:47.750181  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:47.760651  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:48.177093  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:48.225424  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:48.244069  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:48.677636  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:48.726605  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:48.744751  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:49.178412  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:49.225758  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:49.248984  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:49.678129  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:49.735650  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:49.743717  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:50.199715  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:50.245525  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:50.248037  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:50.677309  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:50.726251  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:50.743674  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:51.177136  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:51.226006  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:51.244020  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:51.677620  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:51.727368  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:51.744372  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:52.176622  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:52.225634  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:52.244297  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:52.677487  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:52.725696  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:52.743614  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:53.368485  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:53.368640  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:53.372893  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:53.676823  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:53.726720  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:53.744536  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:54.177177  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:54.235722  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:54.250439  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:54.677354  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:54.729886  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:54.746152  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:55.180726  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:55.225712  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:55.253480  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:55.676997  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:55.725890  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:55.744919  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:56.177317  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:56.229637  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:56.244016  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:56.677268  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:56.725748  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:56.744249  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:57.179151  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:57.224769  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:57.245048  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:57.676884  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:57.726635  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:57.744902  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:58.176566  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:58.229060  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:58.245043  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:58.676562  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:58.726626  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:58.744497  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:59.177991  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:59.227155  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:59.243927  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:57:59.677443  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:57:59.726245  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:57:59.744409  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:00.177312  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:00.226557  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:00.243748  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:00.677211  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:00.725670  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:00.744106  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:01.177162  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:01.228278  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:01.243471  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:01.678006  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:01.725957  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:01.745850  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:02.180235  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:02.226317  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:02.244882  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:02.676827  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:02.727471  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:02.744595  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:03.177528  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:03.225760  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:03.244005  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:03.677474  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:03.726325  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:03.744246  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:04.177225  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:04.225821  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:04.243727  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:04.678027  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:04.725655  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:04.744691  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:05.177043  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:05.225922  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:05.244538  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:05.693318  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:05.725047  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:05.744364  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:06.178718  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:06.225678  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:06.243976  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:06.677740  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:06.727422  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:06.745941  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:07.176876  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:07.226782  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:07.246902  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:07.678117  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:07.732617  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:07.746047  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:08.177340  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:08.225904  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:08.243732  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:08.676713  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:08.726356  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:08.743603  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:09.176544  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:09.226027  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:09.245375  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:09.677537  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:09.732778  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:09.748103  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:10.179339  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:10.226808  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:10.244811  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:10.679591  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:10.733337  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:10.748459  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:11.178105  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:11.231918  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:11.245697  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:11.676910  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:11.732228  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:11.743434  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:12.177129  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:12.225154  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:12.244433  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:12.676243  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:12.725484  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:12.743856  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:13.177064  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:13.225620  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:13.244008  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:13.677253  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:13.730714  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:13.749060  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:14.176576  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:14.225604  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:14.243741  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:14.676931  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:14.728690  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:14.743919  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:15.177034  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:15.233667  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:15.243926  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:15.677135  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:15.725386  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:15.752198  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:16.176274  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:16.225635  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:16.243856  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:16.676508  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:16.726460  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:16.743364  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:17.177641  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:17.225809  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:17.244536  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:17.677427  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:17.725915  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:17.744727  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:18.210097  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:18.225558  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:18.244036  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:18.677187  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:18.725872  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0308 02:58:18.743643  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:19.177146  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:19.226422  919714 kapi.go:107] duration metric: took 59.506821429s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0308 02:58:19.243650  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:19.676807  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:19.744406  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:20.177208  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:20.244688  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:20.677408  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:20.744599  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:21.177588  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:21.245270  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:21.676918  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:21.745496  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:22.177104  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:22.244321  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:22.677328  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:22.744712  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:23.178059  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:23.247099  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:23.679392  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:23.747904  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:24.177495  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:24.245164  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:24.677793  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:24.744420  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:25.177938  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:25.251090  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:25.677907  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:25.744160  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:26.177402  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:26.244483  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:26.677391  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:26.744443  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:27.176680  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:27.244487  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:27.676900  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:27.744671  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:28.177296  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:28.245070  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:28.677468  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:28.746116  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:29.178323  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:29.244660  919714 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0308 02:58:30.019922  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:30.020223  919714 kapi.go:107] duration metric: took 1m12.784233594s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0308 02:58:30.176871  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:30.676966  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:31.176674  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:31.677970  919714 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0308 02:58:32.177771  919714 kapi.go:107] duration metric: took 1m10.505033097s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0308 02:58:32.179455  919714 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-963897 cluster.
	I0308 02:58:32.180932  919714 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0308 02:58:32.182443  919714 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0308 02:58:32.183999  919714 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, yakd, storage-provisioner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0308 02:58:32.185388  919714 addons.go:505] duration metric: took 1m25.165679999s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns helm-tiller yakd storage-provisioner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0308 02:58:32.185430  919714 start.go:245] waiting for cluster config update ...
	I0308 02:58:32.185449  919714 start.go:254] writing updated cluster config ...
	I0308 02:58:32.185777  919714 ssh_runner.go:195] Run: rm -f paused
	I0308 02:58:32.240339  919714 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 02:58:32.241815  919714 out.go:177] * Done! kubectl is now configured to use "addons-963897" cluster and "default" namespace by default
	
	
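	(Note on the gcp-auth messages in the log above: they describe the `gcp-auth-skip-secret` pod label as the way to opt a specific pod out of having GCP credentials mounted. Below is a minimal, illustrative Go sketch of a pod object carrying that label; it is not part of this test run's output, and the pod name, container image, and the label value "true" are assumptions for illustration only — the webhook behaviour is keyed on the label itself.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Illustrative pod spec: the gcp-auth-skip-secret label signals the
		// gcp-auth webhook (per the addon message above) not to mount credentials.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "example-no-gcp-creds", // placeholder name
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // label key from the log; value assumed
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"}, // placeholder container
				},
			},
		}
		fmt.Println(pod.ObjectMeta.Labels)
	}

	(The same label can equally be set in a YAML manifest's metadata.labels; the Go form is shown only to keep the sketch self-contained.)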
	==> CRI-O <==
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.632168378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709866898632059433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563252,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d43f7af0-1284-4f13-b422-2aaeda53cc81 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.633482817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28762b49-fbcf-4368-ba1c-284ab57b5fdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.633536388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28762b49-fbcf-4368-ba1c-284ab57b5fdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.633959675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cff33195bf4690f89c8ee06e2b9f4eb0a2ff0de46704a667a63d14403714f087,PodSandboxId:dfad4c18a76861c3d9a19e212853533625675c06cdc7fa3c82f3cca294fe0821,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709866889924964590,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9rvd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc4a5fd6-b444-439b-8c65-26c5f9edd8cb,},Annotations:map[string]string{io.kubernetes.container.hash: 67bcac49,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d1f60f2806a2d56241400a52423f522fffc3f316c1f1e136ebe49b0b6c582a,PodSandboxId:0c6cadd546f90658f28372b2f3d2dc06bfd63b33e6cca807bbc73162057a0cd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709866751841087301,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf76ad91-e4c0-4d06-b04c-597192b9dea0,},Annotations:map[string]string{io.kubern
etes.container.hash: 5f714949,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c865889998eeb5b0052d3f788258c3c19b3eaa88a61185d0ca5f106ab67f7a50,PodSandboxId:a15cbf87199bfceb0051193f79dbf9c75ed89937dde04d3b48356bd13ee9716e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709866735273769244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-frnvt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: f8ff87d5-f64c-4696-97eb-f95b48854ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bb0e4c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb5643532970948978d771e61b1374ecf673c505893d7febc278d04823ef100,PodSandboxId:f994f9ba06366b74c45754226c1ae70434ee73af6e6e773787f75e9611cd0f8d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709866711276581671,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-vw4fb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fcf41548-0300-495a-8895-f428f78385e0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a61b458,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6519436f913efd3f881b2b4b27248a5501e43c8bd620d0b36f45d36b0a960ac4,PodSandboxId:cbfbc4974a7ffcbca6d6ad2e908eef9aa87f26ff3754c78b8d31106db2d99314,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1709866684051181305,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fsq8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dca58fdd-2331-4eec-bd17-368512009190,},Annotations:map[string]string{io.kubernetes.container.hash: acb1417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de77131957ed8c48cdbc05fcc0de63afcdb548cf2cc74eaff4a46097b8c26c4,PodSandboxId:52fd59cfb18631d0f2005e1cdaf9409c9d40ccde085a8355815c46f2340e9b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1709866683920247732,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pskg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d7f30e9-ae20-4a20-bf40-d87a6869e74c,},Annotations:map[string]string{io.kubernetes.container.hash: aeb8045a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ec4c3ab3df06b30266bdff644f5f574c19a8648b87478728990e4fee1b1007,PodSandboxId:eaf37dbbfdf39612368261f492222bad956112a51252d885728f7b9a8dc54867,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709866681539017053,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-62pxk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0eff270d-61f4-4227-a0b2-996e1279ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3b447,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002b668bc7feb37268855db00c8271cb95481dac0e08d2352e63823d3631a30a,PodSandboxId:4e4ca259856362b661e14f534c9fdeafe9f30023673f5f9372c6174f2cf91a9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709866636677024018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58345b2b-7c94-4525-9265-4a479388495a,},Annotations:map[string]string{io.kubernetes.container.hash: 442252a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709a76f2590f4d2e5c361f64e72cb0265d93ee4db364ceee8e52ec3fbf8bd647,PodSandboxId:fac57b6b7030763e9ff6da9097c70ae49a492652fb1cd56e6a73ebdd3fe7aa70,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709866630858649408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nkp5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d3c210e-fccf-4b3d-8c1c-e33ac749ad72,},Annotations:map[string]string{io.kubernetes.container.hash: c2d831ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9e42d76ac0f6c0519204d7ba59c9a6472afc97e475cac4ed4e58af8742ad16,PodSandboxId:db252337ba34d516c3e8361634141d58abfcc97e0f8e4e5f04d9193c5b1c4e
cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709866630068639052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42bsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbbc3f6-509a-44d1-9738-8e887f2d44af,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7dc3c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cca747b404f664d464e11c604340e40868268ef92f93b0c88d3d6317f59046,PodSandboxId:06e099d32516c13e201f99da00e33bb786029cd80459743791d98bc6d9a7c6ad,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709866607655673638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85403d95e60f9a9dc3ef2ed67fd5411d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cb7817ca20c6bd9b3cbfadd39f89a563443d3644c5eccb5c07506d0a26ba05,PodSandboxId:06f9645d7dfdda022cf7876b74503796334190a3abf937fb9a94337ab8289c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709866607658130492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f694ae937869f0ba5da99da76dedeb,},Annotations:map[string]string{io.kubernetes.container.hash: 5ca946ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c06a7b9a5f44ecd194f0a47ca2b320c2211bb7f95a9285e132ff8ae26ae8e,PodSandboxId:b230f495c83fc0e3ebb732bd96dbbd487fd732110463b63d08b163c6f0d13cf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709866607592796751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047a36b9209058de88b31bc9ebf42a99,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69499a1279bd4b430fc6247312f1d414808c0da9418f3b7c73b9f142393aea9,PodSandboxId:057ad8dc686986cc326f144821305dab7a5735721ec5ec1e204a512007a690ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709866607510288824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a3409abf53fa052292a65ab7561bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f00463a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28762b49-fbcf-4368-ba1c-284ab57b5fdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.677226645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0f177d9-be1a-4867-8b77-cfed5e0c3477 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.677345648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0f177d9-be1a-4867-8b77-cfed5e0c3477 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.678718718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32d78ed3-6dfb-48ee-bbf3-5ef5861730ad name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.680016113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709866898679989848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563252,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32d78ed3-6dfb-48ee-bbf3-5ef5861730ad name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.681004855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5416cafa-0c9e-4c27-a4e5-84607781bc32 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.681062653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5416cafa-0c9e-4c27-a4e5-84607781bc32 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.681372492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cff33195bf4690f89c8ee06e2b9f4eb0a2ff0de46704a667a63d14403714f087,PodSandboxId:dfad4c18a76861c3d9a19e212853533625675c06cdc7fa3c82f3cca294fe0821,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709866889924964590,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9rvd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc4a5fd6-b444-439b-8c65-26c5f9edd8cb,},Annotations:map[string]string{io.kubernetes.container.hash: 67bcac49,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d1f60f2806a2d56241400a52423f522fffc3f316c1f1e136ebe49b0b6c582a,PodSandboxId:0c6cadd546f90658f28372b2f3d2dc06bfd63b33e6cca807bbc73162057a0cd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709866751841087301,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf76ad91-e4c0-4d06-b04c-597192b9dea0,},Annotations:map[string]string{io.kubern
etes.container.hash: 5f714949,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c865889998eeb5b0052d3f788258c3c19b3eaa88a61185d0ca5f106ab67f7a50,PodSandboxId:a15cbf87199bfceb0051193f79dbf9c75ed89937dde04d3b48356bd13ee9716e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709866735273769244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-frnvt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: f8ff87d5-f64c-4696-97eb-f95b48854ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bb0e4c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb5643532970948978d771e61b1374ecf673c505893d7febc278d04823ef100,PodSandboxId:f994f9ba06366b74c45754226c1ae70434ee73af6e6e773787f75e9611cd0f8d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709866711276581671,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-vw4fb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fcf41548-0300-495a-8895-f428f78385e0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a61b458,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6519436f913efd3f881b2b4b27248a5501e43c8bd620d0b36f45d36b0a960ac4,PodSandboxId:cbfbc4974a7ffcbca6d6ad2e908eef9aa87f26ff3754c78b8d31106db2d99314,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1709866684051181305,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fsq8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dca58fdd-2331-4eec-bd17-368512009190,},Annotations:map[string]string{io.kubernetes.container.hash: acb1417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de77131957ed8c48cdbc05fcc0de63afcdb548cf2cc74eaff4a46097b8c26c4,PodSandboxId:52fd59cfb18631d0f2005e1cdaf9409c9d40ccde085a8355815c46f2340e9b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1709866683920247732,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pskg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d7f30e9-ae20-4a20-bf40-d87a6869e74c,},Annotations:map[string]string{io.kubernetes.container.hash: aeb8045a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ec4c3ab3df06b30266bdff644f5f574c19a8648b87478728990e4fee1b1007,PodSandboxId:eaf37dbbfdf39612368261f492222bad956112a51252d885728f7b9a8dc54867,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709866681539017053,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-62pxk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0eff270d-61f4-4227-a0b2-996e1279ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3b447,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002b668bc7feb37268855db00c8271cb95481dac0e08d2352e63823d3631a30a,PodSandboxId:4e4ca259856362b661e14f534c9fdeafe9f30023673f5f9372c6174f2cf91a9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709866636677024018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58345b2b-7c94-4525-9265-4a479388495a,},Annotations:map[string]string{io.kubernetes.container.hash: 442252a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709a76f2590f4d2e5c361f64e72cb0265d93ee4db364ceee8e52ec3fbf8bd647,PodSandboxId:fac57b6b7030763e9ff6da9097c70ae49a492652fb1cd56e6a73ebdd3fe7aa70,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709866630858649408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nkp5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d3c210e-fccf-4b3d-8c1c-e33ac749ad72,},Annotations:map[string]string{io.kubernetes.container.hash: c2d831ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9e42d76ac0f6c0519204d7ba59c9a6472afc97e475cac4ed4e58af8742ad16,PodSandboxId:db252337ba34d516c3e8361634141d58abfcc97e0f8e4e5f04d9193c5b1c4e
cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709866630068639052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42bsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbbc3f6-509a-44d1-9738-8e887f2d44af,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7dc3c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cca747b404f664d464e11c604340e40868268ef92f93b0c88d3d6317f59046,PodSandboxId:06e099d32516c13e201f99da00e33bb786029cd80459743791d98bc6d9a7c6ad,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709866607655673638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85403d95e60f9a9dc3ef2ed67fd5411d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cb7817ca20c6bd9b3cbfadd39f89a563443d3644c5eccb5c07506d0a26ba05,PodSandboxId:06f9645d7dfdda022cf7876b74503796334190a3abf937fb9a94337ab8289c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709866607658130492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f694ae937869f0ba5da99da76dedeb,},Annotations:map[string]string{io.kubernetes.container.hash: 5ca946ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c06a7b9a5f44ecd194f0a47ca2b320c2211bb7f95a9285e132ff8ae26ae8e,PodSandboxId:b230f495c83fc0e3ebb732bd96dbbd487fd732110463b63d08b163c6f0d13cf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709866607592796751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047a36b9209058de88b31bc9ebf42a99,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69499a1279bd4b430fc6247312f1d414808c0da9418f3b7c73b9f142393aea9,PodSandboxId:057ad8dc686986cc326f144821305dab7a5735721ec5ec1e204a512007a690ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709866607510288824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a3409abf53fa052292a65ab7561bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f00463a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5416cafa-0c9e-4c27-a4e5-84607781bc32 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.715332022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6826b642-5d3a-49fd-a71a-55e59ce974dd name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.715408390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6826b642-5d3a-49fd-a71a-55e59ce974dd name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.721210280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e9c9dbf-e6f7-419c-bc0a-9b781f4e6f1f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.722930829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709866898722896046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563252,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e9c9dbf-e6f7-419c-bc0a-9b781f4e6f1f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.728305940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d1ae164-185c-48e9-94c3-bdfd3009dd1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.728742895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d1ae164-185c-48e9-94c3-bdfd3009dd1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.729686255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cff33195bf4690f89c8ee06e2b9f4eb0a2ff0de46704a667a63d14403714f087,PodSandboxId:dfad4c18a76861c3d9a19e212853533625675c06cdc7fa3c82f3cca294fe0821,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709866889924964590,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9rvd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc4a5fd6-b444-439b-8c65-26c5f9edd8cb,},Annotations:map[string]string{io.kubernetes.container.hash: 67bcac49,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d1f60f2806a2d56241400a52423f522fffc3f316c1f1e136ebe49b0b6c582a,PodSandboxId:0c6cadd546f90658f28372b2f3d2dc06bfd63b33e6cca807bbc73162057a0cd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709866751841087301,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf76ad91-e4c0-4d06-b04c-597192b9dea0,},Annotations:map[string]string{io.kubern
etes.container.hash: 5f714949,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c865889998eeb5b0052d3f788258c3c19b3eaa88a61185d0ca5f106ab67f7a50,PodSandboxId:a15cbf87199bfceb0051193f79dbf9c75ed89937dde04d3b48356bd13ee9716e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709866735273769244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-frnvt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: f8ff87d5-f64c-4696-97eb-f95b48854ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bb0e4c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb5643532970948978d771e61b1374ecf673c505893d7febc278d04823ef100,PodSandboxId:f994f9ba06366b74c45754226c1ae70434ee73af6e6e773787f75e9611cd0f8d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709866711276581671,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-vw4fb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fcf41548-0300-495a-8895-f428f78385e0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a61b458,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6519436f913efd3f881b2b4b27248a5501e43c8bd620d0b36f45d36b0a960ac4,PodSandboxId:cbfbc4974a7ffcbca6d6ad2e908eef9aa87f26ff3754c78b8d31106db2d99314,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1709866684051181305,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fsq8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dca58fdd-2331-4eec-bd17-368512009190,},Annotations:map[string]string{io.kubernetes.container.hash: acb1417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de77131957ed8c48cdbc05fcc0de63afcdb548cf2cc74eaff4a46097b8c26c4,PodSandboxId:52fd59cfb18631d0f2005e1cdaf9409c9d40ccde085a8355815c46f2340e9b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1709866683920247732,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pskg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d7f30e9-ae20-4a20-bf40-d87a6869e74c,},Annotations:map[string]string{io.kubernetes.container.hash: aeb8045a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ec4c3ab3df06b30266bdff644f5f574c19a8648b87478728990e4fee1b1007,PodSandboxId:eaf37dbbfdf39612368261f492222bad956112a51252d885728f7b9a8dc54867,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709866681539017053,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-62pxk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0eff270d-61f4-4227-a0b2-996e1279ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3b447,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002b668bc7feb37268855db00c8271cb95481dac0e08d2352e63823d3631a30a,PodSandboxId:4e4ca259856362b661e14f534c9fdeafe9f30023673f5f9372c6174f2cf91a9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709866636677024018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58345b2b-7c94-4525-9265-4a479388495a,},Annotations:map[string]string{io.kubernetes.container.hash: 442252a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709a76f2590f4d2e5c361f64e72cb0265d93ee4db364ceee8e52ec3fbf8bd647,PodSandboxId:fac57b6b7030763e9ff6da9097c70ae49a492652fb1cd56e6a73ebdd3fe7aa70,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709866630858649408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nkp5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d3c210e-fccf-4b3d-8c1c-e33ac749ad72,},Annotations:map[string]string{io.kubernetes.container.hash: c2d831ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9e42d76ac0f6c0519204d7ba59c9a6472afc97e475cac4ed4e58af8742ad16,PodSandboxId:db252337ba34d516c3e8361634141d58abfcc97e0f8e4e5f04d9193c5b1c4e
cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709866630068639052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42bsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbbc3f6-509a-44d1-9738-8e887f2d44af,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7dc3c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cca747b404f664d464e11c604340e40868268ef92f93b0c88d3d6317f59046,PodSandboxId:06e099d32516c13e201f99da00e33bb786029cd80459743791d98bc6d9a7c6ad,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709866607655673638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85403d95e60f9a9dc3ef2ed67fd5411d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cb7817ca20c6bd9b3cbfadd39f89a563443d3644c5eccb5c07506d0a26ba05,PodSandboxId:06f9645d7dfdda022cf7876b74503796334190a3abf937fb9a94337ab8289c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709866607658130492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f694ae937869f0ba5da99da76dedeb,},Annotations:map[string]string{io.kubernetes.container.hash: 5ca946ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c06a7b9a5f44ecd194f0a47ca2b320c2211bb7f95a9285e132ff8ae26ae8e,PodSandboxId:b230f495c83fc0e3ebb732bd96dbbd487fd732110463b63d08b163c6f0d13cf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709866607592796751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047a36b9209058de88b31bc9ebf42a99,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69499a1279bd4b430fc6247312f1d414808c0da9418f3b7c73b9f142393aea9,PodSandboxId:057ad8dc686986cc326f144821305dab7a5735721ec5ec1e204a512007a690ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709866607510288824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a3409abf53fa052292a65ab7561bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f00463a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d1ae164-185c-48e9-94c3-bdfd3009dd1e name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.774633417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=920e19db-c75e-4127-88cc-139ad4019283 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.774738407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=920e19db-c75e-4127-88cc-139ad4019283 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.776104600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66546d0a-4883-43a5-b080-44664c52e0d3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.778372904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709866898778340871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563252,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66546d0a-4883-43a5-b080-44664c52e0d3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.779253185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af6a56c5-d9f8-4f25-a0e2-c4c1fc5522ba name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.779331227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af6a56c5-d9f8-4f25-a0e2-c4c1fc5522ba name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:01:38 addons-963897 crio[678]: time="2024-03-08 03:01:38.779635783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cff33195bf4690f89c8ee06e2b9f4eb0a2ff0de46704a667a63d14403714f087,PodSandboxId:dfad4c18a76861c3d9a19e212853533625675c06cdc7fa3c82f3cca294fe0821,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709866889924964590,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9rvd2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc4a5fd6-b444-439b-8c65-26c5f9edd8cb,},Annotations:map[string]string{io.kubernetes.container.hash: 67bcac49,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d1f60f2806a2d56241400a52423f522fffc3f316c1f1e136ebe49b0b6c582a,PodSandboxId:0c6cadd546f90658f28372b2f3d2dc06bfd63b33e6cca807bbc73162057a0cd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709866751841087301,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bf76ad91-e4c0-4d06-b04c-597192b9dea0,},Annotations:map[string]string{io.kubern
etes.container.hash: 5f714949,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c865889998eeb5b0052d3f788258c3c19b3eaa88a61185d0ca5f106ab67f7a50,PodSandboxId:a15cbf87199bfceb0051193f79dbf9c75ed89937dde04d3b48356bd13ee9716e,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709866735273769244,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-frnvt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: f8ff87d5-f64c-4696-97eb-f95b48854ffb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bb0e4c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb5643532970948978d771e61b1374ecf673c505893d7febc278d04823ef100,PodSandboxId:f994f9ba06366b74c45754226c1ae70434ee73af6e6e773787f75e9611cd0f8d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709866711276581671,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-vw4fb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: fcf41548-0300-495a-8895-f428f78385e0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a61b458,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6519436f913efd3f881b2b4b27248a5501e43c8bd620d0b36f45d36b0a960ac4,PodSandboxId:cbfbc4974a7ffcbca6d6ad2e908eef9aa87f26ff3754c78b8d31106db2d99314,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1709866684051181305,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fsq8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dca58fdd-2331-4eec-bd17-368512009190,},Annotations:map[string]string{io.kubernetes.container.hash: acb1417b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de77131957ed8c48cdbc05fcc0de63afcdb548cf2cc74eaff4a46097b8c26c4,PodSandboxId:52fd59cfb18631d0f2005e1cdaf9409c9d40ccde085a8355815c46f2340e9b10,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1709866683920247732,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pskg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d7f30e9-ae20-4a20-bf40-d87a6869e74c,},Annotations:map[string]string{io.kubernetes.container.hash: aeb8045a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ec4c3ab3df06b30266bdff644f5f574c19a8648b87478728990e4fee1b1007,PodSandboxId:eaf37dbbfdf39612368261f492222bad956112a51252d885728f7b9a8dc54867,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709866681539017053,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-62pxk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0eff270d-61f4-4227-a0b2-996e1279ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: bfa3b447,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:002b668bc7feb37268855db00c8271cb95481dac0e08d2352e63823d3631a30a,PodSandboxId:4e4ca259856362b661e14f534c9fdeafe9f30023673f5f9372c6174f2cf91a9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709866636677024018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58345b2b-7c94-4525-9265-4a479388495a,},Annotations:map[string]string{io.kubernetes.container.hash: 442252a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709a76f2590f4d2e5c361f64e72cb0265d93ee4db364ceee8e52ec3fbf8bd647,PodSandboxId:fac57b6b7030763e9ff6da9097c70ae49a492652fb1cd56e6a73ebdd3fe7aa70,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709866630858649408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-nkp5t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d3c210e-fccf-4b3d-8c1c-e33ac749ad72,},Annotations:map[string]string{io.kubernetes.container.hash: c2d831ed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9e42d76ac0f6c0519204d7ba59c9a6472afc97e475cac4ed4e58af8742ad16,PodSandboxId:db252337ba34d516c3e8361634141d58abfcc97e0f8e4e5f04d9193c5b1c4e
cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709866630068639052,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-42bsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbbc3f6-509a-44d1-9738-8e887f2d44af,},Annotations:map[string]string{io.kubernetes.container.hash: 4b7dc3c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cca747b404f664d464e11c604340e40868268ef92f93b0c88d3d6317f59046,PodSandboxId:06e099d32516c13e201f99da00e33bb786029cd80459743791d98bc6d9a7c6ad,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709866607655673638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85403d95e60f9a9dc3ef2ed67fd5411d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cb7817ca20c6bd9b3cbfadd39f89a563443d3644c5eccb5c07506d0a26ba05,PodSandboxId:06f9645d7dfdda022cf7876b74503796334190a3abf937fb9a94337ab8289c3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709866607658130492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6f694ae937869f0ba5da99da76dedeb,},Annotations:map[string]string{io.kubernetes.container.hash: 5ca946ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028c06a7b9a5f44ecd194f0a47ca2b320c2211bb7f95a9285e132ff8ae26ae8e,PodSandboxId:b230f495c83fc0e3ebb732bd96dbbd487fd732110463b63d08b163c6f0d13cf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709866607592796751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 047a36b9209058de88b31bc9ebf42a99,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b69499a1279bd4b430fc6247312f1d414808c0da9418f3b7c73b9f142393aea9,PodSandboxId:057ad8dc686986cc326f144821305dab7a5735721ec5ec1e204a512007a690ae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709866607510288824,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-963897,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a3409abf53fa052292a65ab7561bc1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f00463a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af6a56c5-d9f8-4f25-a0e2-c4c1fc5522ba name=/runtime.v1.RuntimeService/ListContainers
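
	The CRI-O entries above come from the node's systemd journal. If deeper inspection is needed while the cluster is still up, the same stream can be pulled directly from the VM; a minimal sketch, assuming the addons-963897 profile from this run is still present:

	    # tail the CRI-O service journal inside the minikube VM (profile name taken from the logs above)
	    minikube -p addons-963897 ssh -- sudo journalctl -u crio --no-pager -n 200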
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cff33195bf469       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   dfad4c18a7686       hello-world-app-5d77478584-9rvd2
	98d1f60f2806a       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   0c6cadd546f90       nginx
	c865889998eeb       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   a15cbf87199bf       headlamp-7ddfbb94ff-frnvt
	1eb5643532970       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 3 minutes ago       Running             gcp-auth                  0                   f994f9ba06366       gcp-auth-5f6b4f85fd-vw4fb
	6519436f913ef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   cbfbc4974a7ff       ingress-nginx-admission-patch-fsq8k
	6de77131957ed       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   52fd59cfb1863       ingress-nginx-admission-create-6pskg
	32ec4c3ab3df0       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   eaf37dbbfdf39       yakd-dashboard-9947fc6bf-62pxk
	002b668bc7feb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   4e4ca25985636       storage-provisioner
	709a76f2590f4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   fac57b6b70307       coredns-5dd5756b68-nkp5t
	be9e42d76ac0f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   db252337ba34d       kube-proxy-42bsl
	32cb7817ca20c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   06f9645d7dfdd       etcd-addons-963897
	03cca747b404f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   06e099d32516c       kube-scheduler-addons-963897
	028c06a7b9a5f       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   b230f495c83fc       kube-controller-manager-addons-963897
	b69499a1279bd       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   057ad8dc68698       kube-apiserver-addons-963897
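
	The table above is the CRI view of the node, so it can be reproduced with crictl against the CRI-O socket; a minimal sketch, assuming the same profile and the default endpoint shown in the logs (unix:///var/run/crio/crio.sock):

	    # list all containers, including exited ones, as reported by CRI-O
	    minikube -p addons-963897 ssh -- sudo crictl ps -a
	    # inspect a single container by an ID prefix from the table, e.g. the hello-world-app container
	    minikube -p addons-963897 ssh -- sudo crictl inspect cff33195bf469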
	
	
	==> coredns [709a76f2590f4d2e5c361f64e72cb0265d93ee4db364ceee8e52ec3fbf8bd647] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59861 - 64726 "HINFO IN 3007398451578941380.5310243460152658968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009965208s
	[INFO] 10.244.0.22:58228 - 32779 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000526377s
	[INFO] 10.244.0.22:35092 - 52039 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000674705s
	[INFO] 10.244.0.22:45099 - 36478 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071391s
	[INFO] 10.244.0.22:42977 - 12217 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123276s
	[INFO] 10.244.0.22:51627 - 17781 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111199s
	[INFO] 10.244.0.22:43688 - 40953 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133626s
	[INFO] 10.244.0.22:55158 - 31343 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002983725s
	[INFO] 10.244.0.22:40388 - 31986 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.003223552s
	[INFO] 10.244.0.26:41862 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000311607s
	[INFO] 10.244.0.26:55195 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000464434s
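
	The NXDOMAIN answers above are the normal walk through the pod's search domains (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local) before the upstream NOERROR response. In-cluster resolution can be spot-checked with a throwaway pod; a minimal sketch, assuming the kubectl context for this profile is still available:

	    # run a one-off busybox pod and resolve the same external name through CoreDNS
	    kubectl --context addons-963897 run dns-check --image=busybox --restart=Never --rm -it -- nslookup storage.googleapis.com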
	
	
	==> describe nodes <==
	Name:               addons-963897
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-963897
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=addons-963897
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T02_56_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-963897
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 02:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-963897
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:01:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 02:59:27 +0000   Fri, 08 Mar 2024 02:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 02:59:27 +0000   Fri, 08 Mar 2024 02:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 02:59:27 +0000   Fri, 08 Mar 2024 02:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 02:59:27 +0000   Fri, 08 Mar 2024 02:56:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    addons-963897
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 a22509ad08db4a689911a1037c22bfff
	  System UUID:                a22509ad-08db-4a68-9911-a1037c22bfff
	  Boot ID:                    b85bf6c5-4a4b-484e-ae3c-8e268948bf4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9rvd2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-5f6b4f85fd-vw4fb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  headlamp                    headlamp-7ddfbb94ff-frnvt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 coredns-5dd5756b68-nkp5t                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-963897                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-963897             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-963897    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-42bsl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-963897             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-62pxk           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m53s)  kubelet          Node addons-963897 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m53s)  kubelet          Node addons-963897 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m53s)  kubelet          Node addons-963897 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node addons-963897 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node addons-963897 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node addons-963897 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m45s                  kubelet          Node addons-963897 status is now: NodeReady
	  Normal  RegisteredNode           4m33s                  node-controller  Node addons-963897 event: Registered Node addons-963897 in Controller
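
	The node description above (conditions, capacity, and the pod resource table) can be regenerated while the cluster is up; a minimal sketch, assuming the same kubectl context:

	    # full node description, matching the dump above
	    kubectl --context addons-963897 describe node addons-963897
	    # or just the Ready condition, without the full dump
	    kubectl --context addons-963897 get node addons-963897 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'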
	
	
	==> dmesg <==
	[Mar 8 02:57] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[  +0.122095] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.028183] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.139339] kauditd_printk_skb: 110 callbacks suppressed
	[  +7.672218] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.485623] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.960629] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.489916] kauditd_printk_skb: 9 callbacks suppressed
	[Mar 8 02:58] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.238938] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.190739] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.401408] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.775042] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.026590] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.670006] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.897998] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.140350] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.397305] kauditd_printk_skb: 16 callbacks suppressed
	[Mar 8 02:59] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.337724] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.187120] kauditd_printk_skb: 33 callbacks suppressed
	[ +19.978244] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.197983] kauditd_printk_skb: 25 callbacks suppressed
	[Mar 8 03:01] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.709545] kauditd_printk_skb: 17 callbacks suppressed
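
	The dmesg excerpt is dominated by kauditd rate-limit notices, which are expected noise under audit load; the kernel ring buffer can be filtered on the VM if a specific event needs to be found. A minimal sketch, assuming the same profile:

	    # kernel messages with human-readable timestamps, dropping the suppressed-callback noise
	    minikube -p addons-963897 ssh -- sudo dmesg -T | grep -v kauditd_printk_skb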
	
	
	==> etcd [32cb7817ca20c6bd9b3cbfadd39f89a563443d3644c5eccb5c07506d0a26ba05] <==
	{"level":"warn","ts":"2024-03-08T02:58:18.20164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"277.954949ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:58:18.202367Z","caller":"traceutil/trace.go:171","msg":"trace[283928423] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1140; }","duration":"278.682739ms","start":"2024-03-08T02:58:17.923671Z","end":"2024-03-08T02:58:18.202354Z","steps":["trace[283928423] 'count revisions from in-memory index tree'  (duration: 277.868174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:58:18.203263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.274878ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:58:18.203338Z","caller":"traceutil/trace.go:171","msg":"trace[153909522] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:1140; }","duration":"352.353521ms","start":"2024-03-08T02:58:17.850971Z","end":"2024-03-08T02:58:18.203325Z","steps":["trace[153909522] 'count revisions from in-memory index tree'  (duration: 350.774122ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:58:18.203394Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:58:17.850958Z","time spent":"352.424987ms","remote":"127.0.0.1:50102","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	{"level":"info","ts":"2024-03-08T02:58:25.969177Z","caller":"traceutil/trace.go:171","msg":"trace[1239629657] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"146.804971ms","start":"2024-03-08T02:58:25.822354Z","end":"2024-03-08T02:58:25.969159Z","steps":["trace[1239629657] 'process raft request'  (duration: 143.987669ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:58:29.998586Z","caller":"traceutil/trace.go:171","msg":"trace[914139891] linearizableReadLoop","detail":"{readStateIndex:1205; appliedIndex:1204; }","duration":"330.669932ms","start":"2024-03-08T02:58:29.667903Z","end":"2024-03-08T02:58:29.998573Z","steps":["trace[914139891] 'read index received'  (duration: 330.408437ms)","trace[914139891] 'applied index is now lower than readState.Index'  (duration: 260.951µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T02:58:29.998987Z","caller":"traceutil/trace.go:171","msg":"trace[198858393] transaction","detail":"{read_only:false; response_revision:1169; number_of_response:1; }","duration":"334.742046ms","start":"2024-03-08T02:58:29.664233Z","end":"2024-03-08T02:58:29.998975Z","steps":["trace[198858393] 'process raft request'  (duration: 334.122511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:58:30.00898Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:58:29.66422Z","time spent":"344.697935ms","remote":"127.0.0.1:50138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5882,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-jhtkl\" mod_revision:733 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-jhtkl\" value_size:5804 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-76dc478dd8-jhtkl\" > >"}
	{"level":"warn","ts":"2024-03-08T02:58:29.999172Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.269121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10814"}
	{"level":"info","ts":"2024-03-08T02:58:30.009072Z","caller":"traceutil/trace.go:171","msg":"trace[754374011] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1169; }","duration":"341.179349ms","start":"2024-03-08T02:58:29.667884Z","end":"2024-03-08T02:58:30.009064Z","steps":["trace[754374011] 'agreement among raft nodes before linearized reading'  (duration: 331.222858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:58:30.009093Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:58:29.667868Z","time spent":"341.219843ms","remote":"127.0.0.1:50138","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10838,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-03-08T02:58:30.00586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.406106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13961"}
	{"level":"info","ts":"2024-03-08T02:58:30.009169Z","caller":"traceutil/trace.go:171","msg":"trace[472892580] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1169; }","duration":"273.825214ms","start":"2024-03-08T02:58:29.735339Z","end":"2024-03-08T02:58:30.009164Z","steps":["trace[472892580] 'agreement among raft nodes before linearized reading'  (duration: 270.37164ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:00.659778Z","caller":"traceutil/trace.go:171","msg":"trace[2099016851] linearizableReadLoop","detail":"{readStateIndex:1516; appliedIndex:1515; }","duration":"310.772732ms","start":"2024-03-08T02:59:00.348991Z","end":"2024-03-08T02:59:00.659764Z","steps":["trace[2099016851] 'read index received'  (duration: 310.652398ms)","trace[2099016851] 'applied index is now lower than readState.Index'  (duration: 119.881µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T02:59:00.66025Z","caller":"traceutil/trace.go:171","msg":"trace[2086813999] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"412.717979ms","start":"2024-03-08T02:59:00.247521Z","end":"2024-03-08T02:59:00.660239Z","steps":["trace[2086813999] 'process raft request'  (duration: 412.167843ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:00.660386Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:59:00.247507Z","time spent":"412.780765ms","remote":"127.0.0.1:50118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1453 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-08T02:59:00.660563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.585294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T02:59:00.660596Z","caller":"traceutil/trace.go:171","msg":"trace[953452489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1471; }","duration":"311.612948ms","start":"2024-03-08T02:59:00.348967Z","end":"2024-03-08T02:59:00.66058Z","steps":["trace[953452489] 'agreement among raft nodes before linearized reading'  (duration: 311.529005ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:00.660618Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:59:00.348952Z","time spent":"311.66226ms","remote":"127.0.0.1:49926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-08T02:59:00.660725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.607188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:501"}
	{"level":"info","ts":"2024-03-08T02:59:00.660739Z","caller":"traceutil/trace.go:171","msg":"trace[1127618719] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1471; }","duration":"219.620943ms","start":"2024-03-08T02:59:00.441113Z","end":"2024-03-08T02:59:00.660734Z","steps":["trace[1127618719] 'agreement among raft nodes before linearized reading'  (duration: 219.590251ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:01.83024Z","caller":"traceutil/trace.go:171","msg":"trace[1252359538] transaction","detail":"{read_only:false; response_revision:1473; number_of_response:1; }","duration":"169.911017ms","start":"2024-03-08T02:59:01.660316Z","end":"2024-03-08T02:59:01.830227Z","steps":["trace[1252359538] 'process raft request'  (duration: 169.83434ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T02:59:05.360459Z","caller":"traceutil/trace.go:171","msg":"trace[516545508] transaction","detail":"{read_only:false; response_revision:1488; number_of_response:1; }","duration":"400.437179ms","start":"2024-03-08T02:59:04.960002Z","end":"2024-03-08T02:59:05.360439Z","steps":["trace[516545508] 'process raft request'  (duration: 400.284773ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T02:59:05.360636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T02:59:04.959987Z","time spent":"400.537204ms","remote":"127.0.0.1:50138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3014,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/task-pv-pod\" mod_revision:1468 > success:<request_put:<key:\"/registry/pods/default/task-pv-pod\" value_size:2972 >> failure:<request_range:<key:\"/registry/pods/default/task-pv-pod\" > >"}
	
	
	==> gcp-auth [1eb5643532970948978d771e61b1374ecf673c505893d7febc278d04823ef100] <==
	2024/03/08 02:58:31 GCP Auth Webhook started!
	2024/03/08 02:58:32 Ready to marshal response ...
	2024/03/08 02:58:32 Ready to write response ...
	2024/03/08 02:58:32 Ready to marshal response ...
	2024/03/08 02:58:32 Ready to write response ...
	2024/03/08 02:58:42 Ready to marshal response ...
	2024/03/08 02:58:42 Ready to write response ...
	2024/03/08 02:58:43 Ready to marshal response ...
	2024/03/08 02:58:43 Ready to write response ...
	2024/03/08 02:58:49 Ready to marshal response ...
	2024/03/08 02:58:49 Ready to write response ...
	2024/03/08 02:58:49 Ready to marshal response ...
	2024/03/08 02:58:49 Ready to write response ...
	2024/03/08 02:58:49 Ready to marshal response ...
	2024/03/08 02:58:49 Ready to write response ...
	2024/03/08 02:58:58 Ready to marshal response ...
	2024/03/08 02:58:58 Ready to write response ...
	2024/03/08 02:59:01 Ready to marshal response ...
	2024/03/08 02:59:01 Ready to write response ...
	2024/03/08 02:59:09 Ready to marshal response ...
	2024/03/08 02:59:09 Ready to write response ...
	2024/03/08 02:59:26 Ready to marshal response ...
	2024/03/08 02:59:26 Ready to write response ...
	2024/03/08 03:01:27 Ready to marshal response ...
	2024/03/08 03:01:27 Ready to write response ...
	
	
	==> kernel <==
	 03:01:39 up 5 min,  0 users,  load average: 0.50, 1.12, 0.59
	Linux addons-963897 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b69499a1279bd4b430fc6247312f1d414808c0da9418f3b7c73b9f142393aea9] <==
	I0308 02:59:09.370322       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0308 02:59:09.554469       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.110.200"}
	W0308 02:59:09.714109       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0308 02:59:11.701345       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0308 02:59:41.981100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:41.981149       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.002678       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.002712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.015560       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.015638       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.030263       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.030337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.056056       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.056112       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.056207       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.056257       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.062311       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.062378       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0308 02:59:42.075674       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0308 02:59:42.075741       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0308 02:59:43.056649       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0308 02:59:43.076737       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0308 02:59:43.087725       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0308 02:59:51.225899       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0308 03:01:28.089589       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.62.204"}
	
	
	==> kube-controller-manager [028c06a7b9a5f44ecd194f0a47ca2b320c2211bb7f95a9285e132ff8ae26ae8e] <==
	W0308 03:00:21.541056       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:00:21.541285       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:00:49.046578       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:00:49.046732       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:00:56.744124       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:00:56.744167       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:01:04.702255       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:01:04.702317       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:01:13.473074       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:01:13.473145       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0308 03:01:27.240999       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:01:27.241225       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0308 03:01:27.827216       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0308 03:01:27.877299       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9rvd2"
	I0308 03:01:27.898312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.077835ms"
	I0308 03:01:27.915268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.989056ms"
	I0308 03:01:27.915420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.63µs"
	I0308 03:01:27.923332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.547µs"
	I0308 03:01:30.738076       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0308 03:01:30.738337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="4.425µs"
	I0308 03:01:30.743976       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0308 03:01:30.997971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.287855ms"
	I0308 03:01:30.998918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="100.32µs"
	W0308 03:01:37.422745       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0308 03:01:37.422954       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [be9e42d76ac0f6c0519204d7ba59c9a6472afc97e475cac4ed4e58af8742ad16] <==
	I0308 02:57:11.168672       1 server_others.go:69] "Using iptables proxy"
	I0308 02:57:11.185977       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I0308 02:57:11.325922       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 02:57:11.325943       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 02:57:11.331241       1 server_others.go:152] "Using iptables Proxier"
	I0308 02:57:11.331278       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 02:57:11.331476       1 server.go:846] "Version info" version="v1.28.4"
	I0308 02:57:11.331485       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 02:57:11.333440       1 config.go:188] "Starting service config controller"
	I0308 02:57:11.333505       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 02:57:11.333526       1 config.go:97] "Starting endpoint slice config controller"
	I0308 02:57:11.333530       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 02:57:11.336315       1 config.go:315] "Starting node config controller"
	I0308 02:57:11.336322       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 02:57:11.433701       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 02:57:11.433759       1 shared_informer.go:318] Caches are synced for service config
	I0308 02:57:11.438069       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [03cca747b404f664d464e11c604340e40868268ef92f93b0c88d3d6317f59046] <==
	W0308 02:56:50.573671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 02:56:50.573746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 02:56:50.574657       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 02:56:50.574696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 02:56:51.403322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 02:56:51.403384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 02:56:51.443417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 02:56:51.443615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 02:56:51.464090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 02:56:51.464115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 02:56:51.551184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 02:56:51.551385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 02:56:51.601767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 02:56:51.601904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 02:56:51.618433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 02:56:51.618481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 02:56:51.694155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 02:56:51.694287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 02:56:51.803508       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 02:56:51.803563       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 02:56:51.828665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 02:56:51.828724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 02:56:51.835419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 02:56:51.835468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0308 02:56:54.348593       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 03:01:27 addons-963897 kubelet[1275]: I0308 03:01:27.893322    1275 memory_manager.go:346] "RemoveStaleState removing state" podUID="f813ac78-0e4f-4a63-9a2f-b2a384a116d6" containerName="csi-external-health-monitor-controller"
	Mar 08 03:01:27 addons-963897 kubelet[1275]: I0308 03:01:27.893330    1275 memory_manager.go:346] "RemoveStaleState removing state" podUID="f813ac78-0e4f-4a63-9a2f-b2a384a116d6" containerName="csi-snapshotter"
	Mar 08 03:01:27 addons-963897 kubelet[1275]: I0308 03:01:27.938405    1275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7g9t\" (UniqueName: \"kubernetes.io/projected/fc4a5fd6-b444-439b-8c65-26c5f9edd8cb-kube-api-access-r7g9t\") pod \"hello-world-app-5d77478584-9rvd2\" (UID: \"fc4a5fd6-b444-439b-8c65-26c5f9edd8cb\") " pod="default/hello-world-app-5d77478584-9rvd2"
	Mar 08 03:01:27 addons-963897 kubelet[1275]: I0308 03:01:27.938505    1275 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc4a5fd6-b444-439b-8c65-26c5f9edd8cb-gcp-creds\") pod \"hello-world-app-5d77478584-9rvd2\" (UID: \"fc4a5fd6-b444-439b-8c65-26c5f9edd8cb\") " pod="default/hello-world-app-5d77478584-9rvd2"
	Mar 08 03:01:29 addons-963897 kubelet[1275]: I0308 03:01:29.350961    1275 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvlw\" (UniqueName: \"kubernetes.io/projected/f3524dd5-1994-4650-80c6-d18fef44db57-kube-api-access-svvlw\") pod \"f3524dd5-1994-4650-80c6-d18fef44db57\" (UID: \"f3524dd5-1994-4650-80c6-d18fef44db57\") "
	Mar 08 03:01:29 addons-963897 kubelet[1275]: I0308 03:01:29.373654    1275 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3524dd5-1994-4650-80c6-d18fef44db57-kube-api-access-svvlw" (OuterVolumeSpecName: "kube-api-access-svvlw") pod "f3524dd5-1994-4650-80c6-d18fef44db57" (UID: "f3524dd5-1994-4650-80c6-d18fef44db57"). InnerVolumeSpecName "kube-api-access-svvlw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:01:29 addons-963897 kubelet[1275]: I0308 03:01:29.451284    1275 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-svvlw\" (UniqueName: \"kubernetes.io/projected/f3524dd5-1994-4650-80c6-d18fef44db57-kube-api-access-svvlw\") on node \"addons-963897\" DevicePath \"\""
	Mar 08 03:01:29 addons-963897 kubelet[1275]: I0308 03:01:29.967322    1275 scope.go:117] "RemoveContainer" containerID="06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6"
	Mar 08 03:01:30 addons-963897 kubelet[1275]: I0308 03:01:30.024433    1275 scope.go:117] "RemoveContainer" containerID="06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6"
	Mar 08 03:01:30 addons-963897 kubelet[1275]: E0308 03:01:30.026031    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6\": container with ID starting with 06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6 not found: ID does not exist" containerID="06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6"
	Mar 08 03:01:30 addons-963897 kubelet[1275]: I0308 03:01:30.026106    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6"} err="failed to get container status \"06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6\": rpc error: code = NotFound desc = could not find container \"06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6\": container with ID starting with 06efa7896e40378b987905bfe0c38d508bb1623cbda550cd46454e4b2b7ec1c6 not found: ID does not exist"
	Mar 08 03:01:30 addons-963897 kubelet[1275]: I0308 03:01:30.099219    1275 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f3524dd5-1994-4650-80c6-d18fef44db57" path="/var/lib/kubelet/pods/f3524dd5-1994-4650-80c6-d18fef44db57/volumes"
	Mar 08 03:01:32 addons-963897 kubelet[1275]: I0308 03:01:32.098657    1275 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4d7f30e9-ae20-4a20-bf40-d87a6869e74c" path="/var/lib/kubelet/pods/4d7f30e9-ae20-4a20-bf40-d87a6869e74c/volumes"
	Mar 08 03:01:32 addons-963897 kubelet[1275]: I0308 03:01:32.099263    1275 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dca58fdd-2331-4eec-bd17-368512009190" path="/var/lib/kubelet/pods/dca58fdd-2331-4eec-bd17-368512009190/volumes"
	Mar 08 03:01:33 addons-963897 kubelet[1275]: I0308 03:01:33.991220    1275 scope.go:117] "RemoveContainer" containerID="310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05"
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.006441    1275 scope.go:117] "RemoveContainer" containerID="310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05"
	Mar 08 03:01:34 addons-963897 kubelet[1275]: E0308 03:01:34.007097    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05\": container with ID starting with 310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05 not found: ID does not exist" containerID="310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05"
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.007159    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05"} err="failed to get container status \"310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05\": rpc error: code = NotFound desc = could not find container \"310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05\": container with ID starting with 310faa99477a48d873695ab80709b21d7896db27ce2232a2edefda673ed4ee05 not found: ID does not exist"
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.097212    1275 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s2gj\" (UniqueName: \"kubernetes.io/projected/f15de47e-19a7-4b5f-9467-079cc53a5bdf-kube-api-access-6s2gj\") pod \"f15de47e-19a7-4b5f-9467-079cc53a5bdf\" (UID: \"f15de47e-19a7-4b5f-9467-079cc53a5bdf\") "
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.097246    1275 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f15de47e-19a7-4b5f-9467-079cc53a5bdf-webhook-cert\") pod \"f15de47e-19a7-4b5f-9467-079cc53a5bdf\" (UID: \"f15de47e-19a7-4b5f-9467-079cc53a5bdf\") "
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.103555    1275 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15de47e-19a7-4b5f-9467-079cc53a5bdf-kube-api-access-6s2gj" (OuterVolumeSpecName: "kube-api-access-6s2gj") pod "f15de47e-19a7-4b5f-9467-079cc53a5bdf" (UID: "f15de47e-19a7-4b5f-9467-079cc53a5bdf"). InnerVolumeSpecName "kube-api-access-6s2gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.104049    1275 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15de47e-19a7-4b5f-9467-079cc53a5bdf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f15de47e-19a7-4b5f-9467-079cc53a5bdf" (UID: "f15de47e-19a7-4b5f-9467-079cc53a5bdf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.198240    1275 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6s2gj\" (UniqueName: \"kubernetes.io/projected/f15de47e-19a7-4b5f-9467-079cc53a5bdf-kube-api-access-6s2gj\") on node \"addons-963897\" DevicePath \"\""
	Mar 08 03:01:34 addons-963897 kubelet[1275]: I0308 03:01:34.198264    1275 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f15de47e-19a7-4b5f-9467-079cc53a5bdf-webhook-cert\") on node \"addons-963897\" DevicePath \"\""
	Mar 08 03:01:36 addons-963897 kubelet[1275]: I0308 03:01:36.099665    1275 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f15de47e-19a7-4b5f-9467-079cc53a5bdf" path="/var/lib/kubelet/pods/f15de47e-19a7-4b5f-9467-079cc53a5bdf/volumes"
	
	
	==> storage-provisioner [002b668bc7feb37268855db00c8271cb95481dac0e08d2352e63823d3631a30a] <==
	I0308 02:57:17.864457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 02:57:17.883872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 02:57:17.884024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 02:57:17.908233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 02:57:17.908348       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-963897_d5286de6-f87f-4de5-afaa-28c7a070c947!
	I0308 02:57:17.911203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a77bc857-c535-471c-a560-576aab244c69", APIVersion:"v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-963897_d5286de6-f87f-4de5-afaa-28c7a070c947 became leader
	I0308 02:57:18.010047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-963897_d5286de6-f87f-4de5-afaa-28c7a070c947!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-963897 -n addons-963897
helpers_test.go:261: (dbg) Run:  kubectl --context addons-963897 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.53s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-963897
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-963897: exit status 82 (2m0.503760464s)

                                                
                                                
-- stdout --
	* Stopping node "addons-963897"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-963897" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-963897
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-963897: exit status 11 (21.737058049s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-963897" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-963897
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-963897: exit status 11 (6.143036537s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-963897" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-963897
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-963897: exit status 11 (6.143683853s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-963897" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.53s)
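Note: the three addon failures above all follow the initial stop timeout; once `minikube stop` exits with status 82, every later `addons enable`/`addons disable` call fails with the same "dial tcp 192.168.39.212:22: connect: no route to host" error, so they are likely downstream of the stop rather than independent failures. Below is a minimal, hypothetical reproduction sketch (not part of addons_test.go); the binary path and profile name are taken from this run's log and may not apply elsewhere.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical helper: re-run the stop command recorded above and
		// surface its exit status so the GUEST_STOP_TIMEOUT case
		// (exit status 82 in this run) is easy to spot.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "addons-963897")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				fmt.Printf("stop exited with status %d\n", ee.ExitCode())
			} else {
				fmt.Printf("stop could not be run: %v\n", err)
			}
		}
	}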

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image save gcr.io/google-containers/addon-resizer:functional-576754 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 image save gcr.io/google-containers/addon-resizer:functional-576754 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.764605089s)
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0308 03:08:23.345130  927329 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:08:23.345252  927329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:23.345267  927329 out.go:304] Setting ErrFile to fd 2...
	I0308 03:08:23.345298  927329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:23.345501  927329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:08:23.346898  927329 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:08:23.347142  927329 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:08:23.347801  927329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:23.347859  927329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:23.363015  927329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
	I0308 03:08:23.363469  927329 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:23.364196  927329 main.go:141] libmachine: Using API Version  1
	I0308 03:08:23.364245  927329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:23.364651  927329 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:23.364912  927329 main.go:141] libmachine: (functional-576754) Calling .GetState
	I0308 03:08:23.367007  927329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:23.367061  927329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:23.381597  927329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0308 03:08:23.382043  927329 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:23.382580  927329 main.go:141] libmachine: Using API Version  1
	I0308 03:08:23.382617  927329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:23.382954  927329 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:23.383207  927329 main.go:141] libmachine: (functional-576754) Calling .DriverName
	I0308 03:08:23.383443  927329 ssh_runner.go:195] Run: systemctl --version
	I0308 03:08:23.383485  927329 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
	I0308 03:08:23.386545  927329 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
	I0308 03:08:23.386980  927329 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
	I0308 03:08:23.387007  927329 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
	I0308 03:08:23.387228  927329 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
	I0308 03:08:23.387440  927329 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
	I0308 03:08:23.387596  927329 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
	I0308 03:08:23.387750  927329 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
	I0308 03:08:23.504695  927329 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0308 03:08:23.504778  927329 cache_images.go:254] Failed to load cached images for profile functional-576754. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0308 03:08:23.504812  927329 cache_images.go:262] succeeded pushing to: 
	I0308 03:08:23.504822  927329 cache_images.go:263] failed pushing to: functional-576754
	I0308 03:08:23.504851  927329 main.go:141] libmachine: Making call to close driver server
	I0308 03:08:23.504866  927329 main.go:141] libmachine: (functional-576754) Calling .Close
	I0308 03:08:23.505166  927329 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:08:23.505187  927329 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:08:23.505184  927329 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
	I0308 03:08:23.505195  927329 main.go:141] libmachine: Making call to close driver server
	I0308 03:08:23.505235  927329 main.go:141] libmachine: (functional-576754) Calling .Close
	I0308 03:08:23.505491  927329 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:08:23.505503  927329 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
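The two image-command failures are linked: `image save` returned success but never wrote the tarball (the assertion at functional_test.go:385), so the later `image load` fails at the stat shown in the 03:08:23.504778 log line above. The sketch below is illustrative only and not the suite's own code; it shows the kind of existence check the save/load round trip depends on, assuming the tarball path from this run.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Illustrative check: after `minikube image save ... <tarball>`, the
		// tarball must exist before `image load` can consume it; in this run
		// the stat fails, matching the "no such file or directory" error above.
		tarball := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
		info, err := os.Stat(tarball)
		if err != nil {
			fmt.Printf("image save did not produce %s: %v\n", tarball, err)
			return
		}
		fmt.Printf("found %s (%d bytes)\n", tarball, info.Size())
	}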

                                                
                                    
x
+
TestMutliControlPlane/serial/StopSecondaryNode (142.08s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 node stop m02 -v=7 --alsologtostderr
E0308 03:13:32.256726  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:13:32.970569  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:13:59.943310  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:14:13.931667  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.499163266s)

                                                
                                                
-- stdout --
	* Stopping node "ha-576225-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:13:30.518537  931625 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:13:30.518684  931625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:13:30.518696  931625 out.go:304] Setting ErrFile to fd 2...
	I0308 03:13:30.518704  931625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:13:30.519387  931625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:13:30.519963  931625 mustload.go:65] Loading cluster: ha-576225
	I0308 03:13:30.520729  931625 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:13:30.520755  931625 stop.go:39] StopHost: ha-576225-m02
	I0308 03:13:30.521202  931625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:13:30.521264  931625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:13:30.537937  931625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0308 03:13:30.538444  931625 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:13:30.539105  931625 main.go:141] libmachine: Using API Version  1
	I0308 03:13:30.539132  931625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:13:30.539525  931625 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:13:30.541689  931625 out.go:177] * Stopping node "ha-576225-m02"  ...
	I0308 03:13:30.542818  931625 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 03:13:30.542863  931625 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:13:30.543072  931625 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 03:13:30.543097  931625 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:13:30.546002  931625 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:13:30.546437  931625 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:13:30.546472  931625 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:13:30.546679  931625 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:13:30.546880  931625 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:13:30.547032  931625 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:13:30.547429  931625 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:13:30.638464  931625 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 03:13:30.692508  931625 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 03:13:30.748947  931625 main.go:141] libmachine: Stopping "ha-576225-m02"...
	I0308 03:13:30.748975  931625 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:13:30.750610  931625 main.go:141] libmachine: (ha-576225-m02) Calling .Stop
	I0308 03:13:30.754396  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 0/120
	I0308 03:13:31.755738  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 1/120
	I0308 03:13:32.757989  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 2/120
	I0308 03:13:33.760262  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 3/120
	I0308 03:13:34.761888  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 4/120
	I0308 03:13:35.763709  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 5/120
	I0308 03:13:36.765226  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 6/120
	I0308 03:13:37.766641  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 7/120
	I0308 03:13:38.768033  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 8/120
	I0308 03:13:39.769206  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 9/120
	I0308 03:13:40.771414  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 10/120
	I0308 03:13:41.773088  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 11/120
	I0308 03:13:42.775152  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 12/120
	I0308 03:13:43.776641  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 13/120
	I0308 03:13:44.777996  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 14/120
	I0308 03:13:45.780029  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 15/120
	I0308 03:13:46.782458  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 16/120
	I0308 03:13:47.784290  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 17/120
	I0308 03:13:48.785734  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 18/120
	I0308 03:13:49.787883  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 19/120
	I0308 03:13:50.790114  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 20/120
	I0308 03:13:51.791652  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 21/120
	I0308 03:13:52.793049  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 22/120
	I0308 03:13:53.794366  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 23/120
	I0308 03:13:54.795632  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 24/120
	I0308 03:13:55.797936  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 25/120
	I0308 03:13:56.799821  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 26/120
	I0308 03:13:57.801162  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 27/120
	I0308 03:13:58.802909  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 28/120
	I0308 03:13:59.804747  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 29/120
	I0308 03:14:00.806881  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 30/120
	I0308 03:14:01.808129  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 31/120
	I0308 03:14:02.809704  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 32/120
	I0308 03:14:03.811747  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 33/120
	I0308 03:14:04.813206  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 34/120
	I0308 03:14:05.815185  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 35/120
	I0308 03:14:06.816922  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 36/120
	I0308 03:14:07.818597  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 37/120
	I0308 03:14:08.820110  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 38/120
	I0308 03:14:09.821493  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 39/120
	I0308 03:14:10.823633  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 40/120
	I0308 03:14:11.824977  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 41/120
	I0308 03:14:12.826524  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 42/120
	I0308 03:14:13.828227  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 43/120
	I0308 03:14:14.829660  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 44/120
	I0308 03:14:15.831321  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 45/120
	I0308 03:14:16.832712  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 46/120
	I0308 03:14:17.834282  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 47/120
	I0308 03:14:18.836075  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 48/120
	I0308 03:14:19.837511  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 49/120
	I0308 03:14:20.839178  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 50/120
	I0308 03:14:21.840806  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 51/120
	I0308 03:14:22.842125  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 52/120
	I0308 03:14:23.843593  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 53/120
	I0308 03:14:24.845405  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 54/120
	I0308 03:14:25.847293  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 55/120
	I0308 03:14:26.848736  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 56/120
	I0308 03:14:27.850114  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 57/120
	I0308 03:14:28.851564  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 58/120
	I0308 03:14:29.853342  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 59/120
	I0308 03:14:30.855485  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 60/120
	I0308 03:14:31.858621  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 61/120
	I0308 03:14:32.860127  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 62/120
	I0308 03:14:33.861615  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 63/120
	I0308 03:14:34.862991  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 64/120
	I0308 03:14:35.865155  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 65/120
	I0308 03:14:36.866469  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 66/120
	I0308 03:14:37.867953  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 67/120
	I0308 03:14:38.869421  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 68/120
	I0308 03:14:39.870769  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 69/120
	I0308 03:14:40.872944  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 70/120
	I0308 03:14:41.874347  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 71/120
	I0308 03:14:42.875863  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 72/120
	I0308 03:14:43.877373  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 73/120
	I0308 03:14:44.878951  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 74/120
	I0308 03:14:45.880877  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 75/120
	I0308 03:14:46.882696  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 76/120
	I0308 03:14:47.884755  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 77/120
	I0308 03:14:48.886246  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 78/120
	I0308 03:14:49.888364  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 79/120
	I0308 03:14:50.890477  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 80/120
	I0308 03:14:51.892243  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 81/120
	I0308 03:14:52.893708  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 82/120
	I0308 03:14:53.894920  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 83/120
	I0308 03:14:54.896529  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 84/120
	I0308 03:14:55.898572  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 85/120
	I0308 03:14:56.900938  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 86/120
	I0308 03:14:57.902956  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 87/120
	I0308 03:14:58.904820  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 88/120
	I0308 03:14:59.906082  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 89/120
	I0308 03:15:00.908424  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 90/120
	I0308 03:15:01.910505  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 91/120
	I0308 03:15:02.912057  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 92/120
	I0308 03:15:03.913359  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 93/120
	I0308 03:15:04.914875  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 94/120
	I0308 03:15:05.917086  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 95/120
	I0308 03:15:06.918447  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 96/120
	I0308 03:15:07.919974  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 97/120
	I0308 03:15:08.921504  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 98/120
	I0308 03:15:09.923683  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 99/120
	I0308 03:15:10.926241  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 100/120
	I0308 03:15:11.927590  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 101/120
	I0308 03:15:12.929048  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 102/120
	I0308 03:15:13.930386  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 103/120
	I0308 03:15:14.932586  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 104/120
	I0308 03:15:15.934760  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 105/120
	I0308 03:15:16.936001  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 106/120
	I0308 03:15:17.937379  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 107/120
	I0308 03:15:18.938530  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 108/120
	I0308 03:15:19.939890  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 109/120
	I0308 03:15:20.942344  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 110/120
	I0308 03:15:21.943610  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 111/120
	I0308 03:15:22.945093  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 112/120
	I0308 03:15:23.946469  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 113/120
	I0308 03:15:24.948428  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 114/120
	I0308 03:15:25.950382  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 115/120
	I0308 03:15:26.951812  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 116/120
	I0308 03:15:27.953244  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 117/120
	I0308 03:15:28.954722  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 118/120
	I0308 03:15:29.956064  931625 main.go:141] libmachine: (ha-576225-m02) Waiting for machine to stop 119/120
	I0308 03:15:30.956840  931625 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 03:15:30.957005  931625 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-576225 node stop m02 -v=7 --alsologtostderr": exit status 30
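Note: the stderr above shows the kvm2 driver polling once per second, "Waiting for machine to stop 0/120" through 119/120, then giving up while the guest is still "Running"; that timeout is what surfaces as exit status 30. A rough, hypothetical Go sketch of that kind of bounded stop-wait loop (not the driver's actual code; getState is an assumed callback):

// Hypothetical sketch of a bounded stop-wait loop like the one logged above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second, up to attempts times, and
// returns an error if the machine never reaches the "Stopped" state.
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never shuts down, as in the log; the log shows
	// 120 attempts, trimmed to 3 here to keep the example short.
	stuck := func() string { return "Running" }
	if err := waitForStop(stuck, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}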
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
E0308 03:15:35.852226  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (19.10648271s)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:15:31.021103  931952 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:15:31.021254  931952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:31.021268  931952 out.go:304] Setting ErrFile to fd 2...
	I0308 03:15:31.021289  931952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:31.021491  931952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:15:31.021679  931952 out.go:298] Setting JSON to false
	I0308 03:15:31.021746  931952 mustload.go:65] Loading cluster: ha-576225
	I0308 03:15:31.021832  931952 notify.go:220] Checking for updates...
	I0308 03:15:31.022265  931952 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:15:31.022289  931952 status.go:255] checking status of ha-576225 ...
	I0308 03:15:31.022781  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.022872  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.041402  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0308 03:15:31.041842  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.042543  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.042579  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.043031  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.043236  931952 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:15:31.044838  931952 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:15:31.044861  931952 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:15:31.045156  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.045192  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.059693  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0308 03:15:31.060098  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.060582  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.060609  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.060904  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.061086  931952 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:15:31.063758  931952 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:31.064244  931952 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:15:31.064281  931952 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:31.064394  931952 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:15:31.064676  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.064709  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.079325  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0308 03:15:31.079724  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.080161  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.080181  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.080579  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.080785  931952 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:15:31.080991  931952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:31.081038  931952 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:15:31.083730  931952 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:31.084303  931952 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:15:31.084347  931952 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:31.084506  931952 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:15:31.084715  931952 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:15:31.084878  931952 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:15:31.085016  931952 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:15:31.175778  931952 ssh_runner.go:195] Run: systemctl --version
	I0308 03:15:31.184298  931952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:31.203569  931952 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:15:31.203596  931952 api_server.go:166] Checking apiserver status ...
	I0308 03:15:31.203635  931952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:15:31.223030  931952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:15:31.233996  931952 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:15:31.234060  931952 ssh_runner.go:195] Run: ls
	I0308 03:15:31.239438  931952 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:15:31.244403  931952 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:15:31.244423  931952 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:15:31.244434  931952 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:31.244452  931952 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:15:31.244804  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.244853  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.260070  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0308 03:15:31.260533  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.261034  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.261058  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.261417  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.261650  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:15:31.263195  931952 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:15:31.263229  931952 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:15:31.263615  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.263659  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.278114  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0308 03:15:31.278504  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.278968  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.278992  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.279309  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.279511  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:15:31.282399  931952 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:31.282821  931952 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:15:31.282844  931952 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:31.282986  931952 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:15:31.283272  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:31.283304  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:31.298414  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0308 03:15:31.298786  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:31.299274  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:31.299297  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:31.299597  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:31.299812  931952 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:15:31.300015  931952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:31.300043  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:15:31.302884  931952 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:31.303295  931952 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:15:31.303323  931952 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:31.303479  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:15:31.303642  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:15:31.303814  931952 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:15:31.303976  931952 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:15:49.681501  931952 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:15:49.681610  931952 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:15:49.681642  931952 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:15:49.681652  931952 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:15:49.681672  931952 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:15:49.681679  931952 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:15:49.682004  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.682048  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.698224  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41255
	I0308 03:15:49.698775  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.699308  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.699334  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.699699  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.699921  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:15:49.701725  931952 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:15:49.701746  931952 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:15:49.702036  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.702089  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.717772  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0308 03:15:49.718221  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.718794  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.718816  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.719144  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.719354  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:15:49.722011  931952 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:49.722535  931952 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:15:49.722572  931952 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:49.722682  931952 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:15:49.722964  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.723007  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.738768  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0308 03:15:49.739193  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.739657  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.739682  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.740067  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.740304  931952 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:15:49.740519  931952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:49.740543  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:15:49.743244  931952 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:49.743650  931952 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:15:49.743675  931952 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:49.743858  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:15:49.744035  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:15:49.744201  931952 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:15:49.744318  931952 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:15:49.831366  931952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:49.853464  931952 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:15:49.853497  931952 api_server.go:166] Checking apiserver status ...
	I0308 03:15:49.853536  931952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:15:49.870941  931952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:15:49.884013  931952 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:15:49.884076  931952 ssh_runner.go:195] Run: ls
	I0308 03:15:49.889521  931952 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:15:49.894265  931952 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:15:49.894299  931952 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:15:49.894311  931952 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:49.894348  931952 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:15:49.894658  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.894703  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.910105  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0308 03:15:49.910578  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.911141  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.911177  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.911530  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.911765  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:15:49.913529  931952 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:15:49.913547  931952 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:15:49.913816  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.913849  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.928218  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0308 03:15:49.928582  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.928977  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.929000  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.929344  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.929529  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:15:49.932166  931952 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:49.932624  931952 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:15:49.932648  931952 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:49.932811  931952 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:15:49.933101  931952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:49.933135  931952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:49.947480  931952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0308 03:15:49.947913  931952 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:49.948450  931952 main.go:141] libmachine: Using API Version  1
	I0308 03:15:49.948481  931952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:49.948750  931952 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:49.948925  931952 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:15:49.949117  931952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:49.949151  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:15:49.951483  931952 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:49.951927  931952 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:15:49.951948  931952 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:49.952111  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:15:49.952268  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:15:49.952390  931952 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:15:49.952541  931952 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:15:50.043436  931952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:50.063037  931952 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr" : exit status 3
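Note: m02 is reported as host "Error" / kubelet "Nonexistent" because the status check could not even open an SSH session; the dial to 192.168.39.128:22 failed with "no route to host" after the unsuccessful stop. A small, hypothetical Go sketch of that kind of TCP reachability probe (illustrative only, not minikube's status implementation; address and timeout are taken from the log context):

// Hypothetical sketch of an SSH reachability probe behind the
// "no route to host" status errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// e.g. "dial tcp 192.168.39.128:22: connect: no route to host"
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.128:22", 10*time.Second); err != nil {
		fmt.Println("host: Error, kubelet: Nonexistent ->", err)
	} else {
		fmt.Println("host: Running")
	}
}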
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-576225 -n ha-576225
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-576225 logs -n 25: (1.536298823s)
helpers_test.go:252: TestMutliControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m03_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m04 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp testdata/cp-test.txt                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m04_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03:/home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m03 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-576225 node stop m02 -v=7                                                     | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:08:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:08:40.294148  927850 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:08:40.294432  927850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:40.294442  927850 out.go:304] Setting ErrFile to fd 2...
	I0308 03:08:40.294446  927850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:40.294655  927850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:08:40.295228  927850 out.go:298] Setting JSON to false
	I0308 03:08:40.296765  927850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24646,"bootTime":1709842674,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:08:40.297169  927850 start.go:139] virtualization: kvm guest
	I0308 03:08:40.299379  927850 out.go:177] * [ha-576225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:08:40.300758  927850 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:08:40.300761  927850 notify.go:220] Checking for updates...
	I0308 03:08:40.302317  927850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:08:40.303647  927850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:08:40.304823  927850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.306071  927850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:08:40.307161  927850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:08:40.308668  927850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:08:40.342264  927850 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 03:08:40.343403  927850 start.go:297] selected driver: kvm2
	I0308 03:08:40.343420  927850 start.go:901] validating driver "kvm2" against <nil>
	I0308 03:08:40.343431  927850 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:08:40.344121  927850 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:08:40.344187  927850 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:08:40.358749  927850 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:08:40.358788  927850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 03:08:40.358971  927850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:08:40.359033  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:08:40.359045  927850 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0308 03:08:40.359052  927850 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 03:08:40.359094  927850 start.go:340] cluster config:
	{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:08:40.359180  927850 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:08:40.360860  927850 out.go:177] * Starting "ha-576225" primary control-plane node in "ha-576225" cluster
	I0308 03:08:40.362023  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:08:40.362051  927850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:08:40.362073  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:08:40.362157  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:08:40.362178  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:08:40.362468  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:08:40.362489  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json: {Name:mkd9a9e70b40bc7cf192b47a94c5105fab3566be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:08:40.362631  927850 start.go:360] acquireMachinesLock for ha-576225: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:08:40.362666  927850 start.go:364] duration metric: took 18.948µs to acquireMachinesLock for "ha-576225"
	I0308 03:08:40.362689  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:08:40.362746  927850 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 03:08:40.364354  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:08:40.364480  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:40.364528  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:40.377890  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0308 03:08:40.378281  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:40.378824  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:08:40.378847  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:40.379150  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:40.379348  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:08:40.379499  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:08:40.379652  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:08:40.379680  927850 client.go:168] LocalClient.Create starting
	I0308 03:08:40.379730  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:08:40.379773  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:08:40.379798  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:08:40.379867  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:08:40.379893  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:08:40.379914  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:08:40.379938  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:08:40.379951  927850 main.go:141] libmachine: (ha-576225) Calling .PreCreateCheck
	I0308 03:08:40.380245  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:08:40.380589  927850 main.go:141] libmachine: Creating machine...
	I0308 03:08:40.380602  927850 main.go:141] libmachine: (ha-576225) Calling .Create
	I0308 03:08:40.380732  927850 main.go:141] libmachine: (ha-576225) Creating KVM machine...
	I0308 03:08:40.381896  927850 main.go:141] libmachine: (ha-576225) DBG | found existing default KVM network
	I0308 03:08:40.382606  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.382480  927873 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0308 03:08:40.382640  927850 main.go:141] libmachine: (ha-576225) DBG | created network xml: 
	I0308 03:08:40.382661  927850 main.go:141] libmachine: (ha-576225) DBG | <network>
	I0308 03:08:40.382671  927850 main.go:141] libmachine: (ha-576225) DBG |   <name>mk-ha-576225</name>
	I0308 03:08:40.382690  927850 main.go:141] libmachine: (ha-576225) DBG |   <dns enable='no'/>
	I0308 03:08:40.382725  927850 main.go:141] libmachine: (ha-576225) DBG |   
	I0308 03:08:40.382751  927850 main.go:141] libmachine: (ha-576225) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 03:08:40.382824  927850 main.go:141] libmachine: (ha-576225) DBG |     <dhcp>
	I0308 03:08:40.382861  927850 main.go:141] libmachine: (ha-576225) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 03:08:40.382874  927850 main.go:141] libmachine: (ha-576225) DBG |     </dhcp>
	I0308 03:08:40.382885  927850 main.go:141] libmachine: (ha-576225) DBG |   </ip>
	I0308 03:08:40.382893  927850 main.go:141] libmachine: (ha-576225) DBG |   
	I0308 03:08:40.382900  927850 main.go:141] libmachine: (ha-576225) DBG | </network>
	I0308 03:08:40.382910  927850 main.go:141] libmachine: (ha-576225) DBG | 
	I0308 03:08:40.387482  927850 main.go:141] libmachine: (ha-576225) DBG | trying to create private KVM network mk-ha-576225 192.168.39.0/24...
	I0308 03:08:40.454041  927850 main.go:141] libmachine: (ha-576225) DBG | private KVM network mk-ha-576225 192.168.39.0/24 created
	I0308 03:08:40.454076  927850 main.go:141] libmachine: (ha-576225) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 ...
	I0308 03:08:40.454085  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.453973  927873 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.454100  927850 main.go:141] libmachine: (ha-576225) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:08:40.454239  927850 main.go:141] libmachine: (ha-576225) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:08:40.700284  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.700163  927873 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa...
	I0308 03:08:40.928145  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.928009  927873 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/ha-576225.rawdisk...
	I0308 03:08:40.928189  927850 main.go:141] libmachine: (ha-576225) DBG | Writing magic tar header
	I0308 03:08:40.928202  927850 main.go:141] libmachine: (ha-576225) DBG | Writing SSH key tar header
	I0308 03:08:40.928210  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.928128  927873 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 ...
	I0308 03:08:40.928225  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225
	I0308 03:08:40.928337  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:08:40.928367  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 (perms=drwx------)
	I0308 03:08:40.928379  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.928392  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:08:40.928401  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:08:40.928412  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:08:40.928420  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:08:40.928427  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home
	I0308 03:08:40.928431  927850 main.go:141] libmachine: (ha-576225) DBG | Skipping /home - not owner
	I0308 03:08:40.928444  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:08:40.928454  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:08:40.928480  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:08:40.928496  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:08:40.928504  927850 main.go:141] libmachine: (ha-576225) Creating domain...
	I0308 03:08:40.929548  927850 main.go:141] libmachine: (ha-576225) define libvirt domain using xml: 
	I0308 03:08:40.929574  927850 main.go:141] libmachine: (ha-576225) <domain type='kvm'>
	I0308 03:08:40.929582  927850 main.go:141] libmachine: (ha-576225)   <name>ha-576225</name>
	I0308 03:08:40.929587  927850 main.go:141] libmachine: (ha-576225)   <memory unit='MiB'>2200</memory>
	I0308 03:08:40.929591  927850 main.go:141] libmachine: (ha-576225)   <vcpu>2</vcpu>
	I0308 03:08:40.929596  927850 main.go:141] libmachine: (ha-576225)   <features>
	I0308 03:08:40.929601  927850 main.go:141] libmachine: (ha-576225)     <acpi/>
	I0308 03:08:40.929604  927850 main.go:141] libmachine: (ha-576225)     <apic/>
	I0308 03:08:40.929611  927850 main.go:141] libmachine: (ha-576225)     <pae/>
	I0308 03:08:40.929631  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.929643  927850 main.go:141] libmachine: (ha-576225)   </features>
	I0308 03:08:40.929653  927850 main.go:141] libmachine: (ha-576225)   <cpu mode='host-passthrough'>
	I0308 03:08:40.929660  927850 main.go:141] libmachine: (ha-576225)   
	I0308 03:08:40.929668  927850 main.go:141] libmachine: (ha-576225)   </cpu>
	I0308 03:08:40.929672  927850 main.go:141] libmachine: (ha-576225)   <os>
	I0308 03:08:40.929677  927850 main.go:141] libmachine: (ha-576225)     <type>hvm</type>
	I0308 03:08:40.929693  927850 main.go:141] libmachine: (ha-576225)     <boot dev='cdrom'/>
	I0308 03:08:40.929702  927850 main.go:141] libmachine: (ha-576225)     <boot dev='hd'/>
	I0308 03:08:40.929706  927850 main.go:141] libmachine: (ha-576225)     <bootmenu enable='no'/>
	I0308 03:08:40.929710  927850 main.go:141] libmachine: (ha-576225)   </os>
	I0308 03:08:40.929714  927850 main.go:141] libmachine: (ha-576225)   <devices>
	I0308 03:08:40.929741  927850 main.go:141] libmachine: (ha-576225)     <disk type='file' device='cdrom'>
	I0308 03:08:40.929761  927850 main.go:141] libmachine: (ha-576225)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/boot2docker.iso'/>
	I0308 03:08:40.929768  927850 main.go:141] libmachine: (ha-576225)       <target dev='hdc' bus='scsi'/>
	I0308 03:08:40.929775  927850 main.go:141] libmachine: (ha-576225)       <readonly/>
	I0308 03:08:40.929780  927850 main.go:141] libmachine: (ha-576225)     </disk>
	I0308 03:08:40.929792  927850 main.go:141] libmachine: (ha-576225)     <disk type='file' device='disk'>
	I0308 03:08:40.929826  927850 main.go:141] libmachine: (ha-576225)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:08:40.929845  927850 main.go:141] libmachine: (ha-576225)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/ha-576225.rawdisk'/>
	I0308 03:08:40.929857  927850 main.go:141] libmachine: (ha-576225)       <target dev='hda' bus='virtio'/>
	I0308 03:08:40.929871  927850 main.go:141] libmachine: (ha-576225)     </disk>
	I0308 03:08:40.929881  927850 main.go:141] libmachine: (ha-576225)     <interface type='network'>
	I0308 03:08:40.929889  927850 main.go:141] libmachine: (ha-576225)       <source network='mk-ha-576225'/>
	I0308 03:08:40.929901  927850 main.go:141] libmachine: (ha-576225)       <model type='virtio'/>
	I0308 03:08:40.929913  927850 main.go:141] libmachine: (ha-576225)     </interface>
	I0308 03:08:40.929925  927850 main.go:141] libmachine: (ha-576225)     <interface type='network'>
	I0308 03:08:40.929936  927850 main.go:141] libmachine: (ha-576225)       <source network='default'/>
	I0308 03:08:40.929944  927850 main.go:141] libmachine: (ha-576225)       <model type='virtio'/>
	I0308 03:08:40.929954  927850 main.go:141] libmachine: (ha-576225)     </interface>
	I0308 03:08:40.929962  927850 main.go:141] libmachine: (ha-576225)     <serial type='pty'>
	I0308 03:08:40.929970  927850 main.go:141] libmachine: (ha-576225)       <target port='0'/>
	I0308 03:08:40.929976  927850 main.go:141] libmachine: (ha-576225)     </serial>
	I0308 03:08:40.929990  927850 main.go:141] libmachine: (ha-576225)     <console type='pty'>
	I0308 03:08:40.930003  927850 main.go:141] libmachine: (ha-576225)       <target type='serial' port='0'/>
	I0308 03:08:40.930011  927850 main.go:141] libmachine: (ha-576225)     </console>
	I0308 03:08:40.930022  927850 main.go:141] libmachine: (ha-576225)     <rng model='virtio'>
	I0308 03:08:40.930036  927850 main.go:141] libmachine: (ha-576225)       <backend model='random'>/dev/random</backend>
	I0308 03:08:40.930047  927850 main.go:141] libmachine: (ha-576225)     </rng>
	I0308 03:08:40.930061  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.930071  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.930075  927850 main.go:141] libmachine: (ha-576225)   </devices>
	I0308 03:08:40.930081  927850 main.go:141] libmachine: (ha-576225) </domain>
	I0308 03:08:40.930090  927850 main.go:141] libmachine: (ha-576225) 
	I0308 03:08:40.934388  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:7f:e5:ac in network default
	I0308 03:08:40.934976  927850 main.go:141] libmachine: (ha-576225) Ensuring networks are active...
	I0308 03:08:40.934993  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:40.935640  927850 main.go:141] libmachine: (ha-576225) Ensuring network default is active
	I0308 03:08:40.935909  927850 main.go:141] libmachine: (ha-576225) Ensuring network mk-ha-576225 is active
	I0308 03:08:40.936425  927850 main.go:141] libmachine: (ha-576225) Getting domain xml...
	I0308 03:08:40.937159  927850 main.go:141] libmachine: (ha-576225) Creating domain...
	I0308 03:08:42.113366  927850 main.go:141] libmachine: (ha-576225) Waiting to get IP...
	I0308 03:08:42.114368  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.114679  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.114763  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.114666  927873 retry.go:31] will retry after 273.842922ms: waiting for machine to come up
	I0308 03:08:42.390230  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.390677  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.390714  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.390617  927873 retry.go:31] will retry after 316.670928ms: waiting for machine to come up
	I0308 03:08:42.709075  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.709424  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.709448  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.709379  927873 retry.go:31] will retry after 360.008598ms: waiting for machine to come up
	I0308 03:08:43.070902  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:43.071307  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:43.071332  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:43.071253  927873 retry.go:31] will retry after 431.037924ms: waiting for machine to come up
	I0308 03:08:43.503994  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:43.504574  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:43.504607  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:43.504519  927873 retry.go:31] will retry after 566.141074ms: waiting for machine to come up
	I0308 03:08:44.072116  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:44.072547  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:44.072581  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:44.072470  927873 retry.go:31] will retry after 662.467797ms: waiting for machine to come up
	I0308 03:08:44.736295  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:44.736750  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:44.736807  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:44.736685  927873 retry.go:31] will retry after 1.071646339s: waiting for machine to come up
	I0308 03:08:45.809584  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:45.810090  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:45.810128  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:45.810010  927873 retry.go:31] will retry after 996.004199ms: waiting for machine to come up
	I0308 03:08:46.807198  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:46.807630  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:46.807657  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:46.807574  927873 retry.go:31] will retry after 1.343148181s: waiting for machine to come up
	I0308 03:08:48.153244  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:48.153633  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:48.153682  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:48.153592  927873 retry.go:31] will retry after 1.632548305s: waiting for machine to come up
	I0308 03:08:49.788450  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:49.788776  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:49.788811  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:49.788717  927873 retry.go:31] will retry after 2.584580251s: waiting for machine to come up
	I0308 03:08:52.376260  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:52.376718  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:52.376749  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:52.376669  927873 retry.go:31] will retry after 3.267198369s: waiting for machine to come up
	I0308 03:08:55.645730  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:55.646110  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:55.646135  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:55.646065  927873 retry.go:31] will retry after 4.457669923s: waiting for machine to come up
	I0308 03:09:00.108584  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:00.108992  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:09:00.109043  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:09:00.108951  927873 retry.go:31] will retry after 5.593586188s: waiting for machine to come up
	I0308 03:09:05.704430  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.704928  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has current primary IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.704954  927850 main.go:141] libmachine: (ha-576225) Found IP for machine: 192.168.39.251
	I0308 03:09:05.704965  927850 main.go:141] libmachine: (ha-576225) Reserving static IP address...
	I0308 03:09:05.705313  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find host DHCP lease matching {name: "ha-576225", mac: "52:54:00:53:24:e8", ip: "192.168.39.251"} in network mk-ha-576225
	I0308 03:09:05.778257  927850 main.go:141] libmachine: (ha-576225) DBG | Getting to WaitForSSH function...
	I0308 03:09:05.778289  927850 main.go:141] libmachine: (ha-576225) Reserved static IP address: 192.168.39.251
	I0308 03:09:05.778303  927850 main.go:141] libmachine: (ha-576225) Waiting for SSH to be available...
	I0308 03:09:05.781259  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.781680  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:05.781715  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.781903  927850 main.go:141] libmachine: (ha-576225) DBG | Using SSH client type: external
	I0308 03:09:05.781925  927850 main.go:141] libmachine: (ha-576225) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa (-rw-------)
	I0308 03:09:05.781965  927850 main.go:141] libmachine: (ha-576225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:09:05.781978  927850 main.go:141] libmachine: (ha-576225) DBG | About to run SSH command:
	I0308 03:09:05.781994  927850 main.go:141] libmachine: (ha-576225) DBG | exit 0
	I0308 03:09:05.913476  927850 main.go:141] libmachine: (ha-576225) DBG | SSH cmd err, output: <nil>: 
	I0308 03:09:05.913820  927850 main.go:141] libmachine: (ha-576225) KVM machine creation complete!
	I0308 03:09:05.914184  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:09:05.914781  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:05.915015  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:05.915182  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:09:05.915198  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:05.916542  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:09:05.916558  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:09:05.916565  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:09:05.916570  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:05.918725  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.919080  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:05.919108  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.919339  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:05.919509  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:05.919656  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:05.919803  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:05.919972  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:05.920202  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:05.920223  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:09:06.032577  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:09:06.032609  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:09:06.032617  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.035477  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.035904  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.035932  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.036059  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.036262  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.036427  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.036610  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.036778  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.036941  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.036951  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:09:06.150337  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:09:06.150450  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:09:06.150462  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:09:06.150470  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.150745  927850 buildroot.go:166] provisioning hostname "ha-576225"
	I0308 03:09:06.150783  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.151063  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.153980  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.154342  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.154373  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.154531  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.154718  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.154852  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.155037  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.155156  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.155350  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.155365  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225 && echo "ha-576225" | sudo tee /etc/hostname
	I0308 03:09:06.287120  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:09:06.287159  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.289949  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.290422  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.290452  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.290700  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.290921  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.291146  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.291325  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.291531  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.291725  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.291742  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:09:06.418818  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:09:06.418849  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:09:06.418870  927850 buildroot.go:174] setting up certificates
	I0308 03:09:06.418881  927850 provision.go:84] configureAuth start
	I0308 03:09:06.418890  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.419232  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:06.422154  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.422513  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.422545  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.422700  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.424976  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.425269  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.425315  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.425534  927850 provision.go:143] copyHostCerts
	I0308 03:09:06.425569  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:09:06.425605  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:09:06.425617  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:09:06.425699  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:09:06.425812  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:09:06.425838  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:09:06.425848  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:09:06.425888  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:09:06.425965  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:09:06.425991  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:09:06.425997  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:09:06.426040  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:09:06.426124  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225 san=[127.0.0.1 192.168.39.251 ha-576225 localhost minikube]
	I0308 03:09:06.563215  927850 provision.go:177] copyRemoteCerts
	I0308 03:09:06.563277  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:09:06.563304  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.566083  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.566378  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.566417  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.566590  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.566787  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.566933  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.567064  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:06.657118  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:09:06.657192  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:09:06.683087  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:09:06.683142  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0308 03:09:06.711091  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:09:06.711162  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:09:06.738785  927850 provision.go:87] duration metric: took 319.889667ms to configureAuth
	I0308 03:09:06.738817  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:09:06.739048  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:06.739173  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.742419  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.742814  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.742840  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.743024  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.743222  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.743417  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.743594  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.743792  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.743974  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.743991  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:09:07.026099  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:09:07.026130  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:09:07.026141  927850 main.go:141] libmachine: (ha-576225) Calling .GetURL
	I0308 03:09:07.027584  927850 main.go:141] libmachine: (ha-576225) DBG | Using libvirt version 6000000
	I0308 03:09:07.029783  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.030120  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.030159  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.030282  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:09:07.030296  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:09:07.030304  927850 client.go:171] duration metric: took 26.650612846s to LocalClient.Create
	I0308 03:09:07.030326  927850 start.go:167] duration metric: took 26.650676556s to libmachine.API.Create "ha-576225"
	I0308 03:09:07.030337  927850 start.go:293] postStartSetup for "ha-576225" (driver="kvm2")
	I0308 03:09:07.030354  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:09:07.030378  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.030600  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:09:07.030631  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.032764  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.033037  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.033078  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.033184  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.033360  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.033518  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.033688  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.119876  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:09:07.124587  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:09:07.124611  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:09:07.124675  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:09:07.124763  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:09:07.124776  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:09:07.124895  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:09:07.134758  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:09:07.160668  927850 start.go:296] duration metric: took 130.315738ms for postStartSetup
	I0308 03:09:07.160722  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:09:07.161344  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:07.163693  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.164044  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.164065  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.164324  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:07.164531  927850 start.go:128] duration metric: took 26.801774502s to createHost
	I0308 03:09:07.164555  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.167892  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.168313  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.168335  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.168518  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.168730  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.168897  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.169056  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.169236  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:07.169442  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:07.169466  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:09:07.286593  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867347.261695886
	
	I0308 03:09:07.286625  927850 fix.go:216] guest clock: 1709867347.261695886
	I0308 03:09:07.286633  927850 fix.go:229] Guest: 2024-03-08 03:09:07.261695886 +0000 UTC Remote: 2024-03-08 03:09:07.164543538 +0000 UTC m=+26.917482463 (delta=97.152348ms)
	I0308 03:09:07.286669  927850 fix.go:200] guest clock delta is within tolerance: 97.152348ms
	I0308 03:09:07.286675  927850 start.go:83] releasing machines lock for "ha-576225", held for 26.923998397s
	I0308 03:09:07.286704  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.287018  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:07.289734  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.290099  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.290123  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.290326  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.290885  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.291082  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.291163  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:09:07.291225  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.291363  927850 ssh_runner.go:195] Run: cat /version.json
	I0308 03:09:07.291393  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.294052  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294114  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294424  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.294449  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294475  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.294523  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294623  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.294697  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.294798  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.294861  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.294935  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.294995  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.295059  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.295112  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.404478  927850 ssh_runner.go:195] Run: systemctl --version
	I0308 03:09:07.411098  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:09:07.575044  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:09:07.582025  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:09:07.582104  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:09:07.599648  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:09:07.599689  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:09:07.599763  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:09:07.623078  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:09:07.637158  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:09:07.637218  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:09:07.652360  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:09:07.666105  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:09:07.777782  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:09:07.914119  927850 docker.go:233] disabling docker service ...
	I0308 03:09:07.914214  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:09:07.930726  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:09:07.944752  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:09:08.080642  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:09:08.218262  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:09:08.233133  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:09:08.253229  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:09:08.253315  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.265163  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:09:08.265224  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.277025  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.288671  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
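Note: the three sed runs above pin the pause image and switch CRI-O to the cgroupfs driver in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch for checking the touched keys afterwards (the grep is an addition, not from the log):

    # Show the keys the sed edits are expected to have set:
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the log above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"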
	I0308 03:09:08.300359  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:09:08.312337  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:09:08.322998  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:09:08.323039  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:09:08.337192  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
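Note: the sysctl probe fails because br_netfilter is not loaded yet, so the next two steps load the module and enable IPv4 forwarding. A minimal sketch of the same sequence with a verification appended (the verification is an assumption, not in the log):

    sudo modprobe br_netfilter                          # makes /proc/sys/net/bridge/* appear
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward     # enable IPv4 forwarding
    # Confirm bridged traffic will traverse iptables (needed by kube-proxy and the CNI):
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward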
	I0308 03:09:08.347570  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:09:08.486444  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:09:08.623050  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:09:08.623156  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:09:08.628275  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:09:08.628333  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:09:08.632624  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:09:08.684740  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:09:08.684833  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:09:08.718558  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:09:08.749449  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:09:08.750921  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:08.753779  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:08.754143  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:08.754169  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:08.754452  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:09:08.758783  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:09:08.772800  927850 kubeadm.go:877] updating cluster {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:09:08.772943  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:09:08.773010  927850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:09:08.805268  927850 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 03:09:08.805415  927850 ssh_runner.go:195] Run: which lz4
	I0308 03:09:08.809582  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0308 03:09:08.809663  927850 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 03:09:08.814188  927850 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 03:09:08.814214  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 03:09:10.606710  927850 crio.go:444] duration metric: took 1.797037668s to copy over tarball
	I0308 03:09:10.606818  927850 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 03:09:13.297404  927850 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690545035s)
	I0308 03:09:13.297442  927850 crio.go:451] duration metric: took 2.690686272s to extract the tarball
	I0308 03:09:13.297450  927850 ssh_runner.go:146] rm: /preloaded.tar.lz4
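Note: no preloaded images were found in CRI-O's store, so the ~458 MB preload tarball is copied to the guest and unpacked into /var, after which crictl reports all images as present. A minimal sketch for peeking at the cached tarball on the host without extracting it; it assumes the lz4 CLI is installed, the path is the one from the log:

    # List the first few entries of the preload tarball:
    lz4 -dc /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head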
	I0308 03:09:13.340681  927850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:09:13.392353  927850 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:09:13.392382  927850 cache_images.go:84] Images are preloaded, skipping loading
	I0308 03:09:13.392391  927850 kubeadm.go:928] updating node { 192.168.39.251 8443 v1.28.4 crio true true} ...
	I0308 03:09:13.392510  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
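Note: the [Service] drop-in above overrides ExecStart so the kubelet starts with the bootstrap kubeconfig, hostname override and node IP that minikube expects; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later. A minimal sketch for inspecting what systemd actually merged, assuming a systemd guest (not part of the log):

    # Show the kubelet unit together with all of its drop-ins, as systemd sees them:
    systemctl cat kubelet
    # List the drop-in directory the log writes 10-kubeadm.conf into:
    ls /etc/systemd/system/kubelet.service.d/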
	I0308 03:09:13.392584  927850 ssh_runner.go:195] Run: crio config
	I0308 03:09:13.449179  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:09:13.449203  927850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 03:09:13.449217  927850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:09:13.449245  927850 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-576225 NodeName:ha-576225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:09:13.449418  927850 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-576225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
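Note: the block above is the complete kubeadm.yaml minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) and is later copied to /var/tmp/minikube/kubeadm.yaml. A minimal, hedged way to sanity-check such a file before init, assuming a recent kubeadm that ships the validate subcommand (not something the log runs):

    # Check that the rendered file parses as a valid kubeadm configuration:
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml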
	I0308 03:09:13.449448  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:09:13.449514  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
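Note: the static pod above runs kube-vip in ARP mode so the control-plane VIP 192.168.39.254 floats on eth0 of whichever control-plane node holds the plndr-cp-lock lease. A minimal sketch for checking the VIP once the node is up; both commands are assumptions, not part of the log:

    # The current leader should carry the VIP as an extra address on eth0:
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # And the apiserver should answer on the VIP (self-signed cert, hence -k):
    curl -k https://192.168.39.254:8443/healthz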
	I0308 03:09:13.449565  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:09:13.460945  927850 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:09:13.461004  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0308 03:09:13.472431  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0308 03:09:13.491557  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:09:13.509663  927850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 03:09:13.527724  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:09:13.546318  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:09:13.550750  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
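Note: the bash one-liner above rewrites /etc/hosts in place: it drops any stale line ending in the hostname, appends the fresh mapping, and copies the temp file back with sudo (the same idiom used earlier for host.minikube.internal). A minimal restatement with the values pulled into variables to make the pattern easier to read; the variable names are placeholders:

    ip="192.168.39.254"; name="control-plane.minikube.internal"
    # Keep every line that does not already end in <tab><name>, then append the new entry:
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts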
	I0308 03:09:13.564788  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:09:13.699617  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:09:13.717939  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.251
	I0308 03:09:13.717972  927850 certs.go:194] generating shared ca certs ...
	I0308 03:09:13.717994  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.718219  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:09:13.718292  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:09:13.718305  927850 certs.go:256] generating profile certs ...
	I0308 03:09:13.718379  927850 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:09:13.718395  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt with IP's: []
	I0308 03:09:13.849139  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt ...
	I0308 03:09:13.849182  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt: {Name:mk32536b65761539df07da1a79a6b1b5b790cbd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.849411  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key ...
	I0308 03:09:13.849433  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key: {Name:mk3231ee4f1f222e55be930cee3f99c59eaa3a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.849565  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201
	I0308 03:09:13.849583  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.254]
	I0308 03:09:14.060754  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 ...
	I0308 03:09:14.060785  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201: {Name:mk0e299082370d42c4949bed72be11ba90c5e095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.060937  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201 ...
	I0308 03:09:14.060951  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201: {Name:mka44f34e7228ac2eee6a53ccb590b8ee666530d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.061019  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:09:14.061123  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:09:14.061190  927850 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:09:14.061205  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt with IP's: []
	I0308 03:09:14.216138  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt ...
	I0308 03:09:14.216175  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt: {Name:mk14538e3305db9cae733a63ff4ec9b8eb2791bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.216337  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key ...
	I0308 03:09:14.216348  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key: {Name:mk28be81ffe2f6fa87b5f077620b9fe69a4c031e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
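Note: the profile certs generated above include an apiserver certificate whose SAN list carries the service IP 10.96.0.1, the node IP 192.168.39.251 and the kube-vip address 192.168.39.254. A minimal sketch for inspecting that SAN list on the generated file with openssl (the inspection is an addition; the path is taken from the log):

    # Print the Subject Alternative Name extension of the generated apiserver certificate:
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt \
      | grep -A1 'Subject Alternative Name'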
	I0308 03:09:14.216415  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:09:14.216432  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:09:14.216445  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:09:14.216459  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:09:14.216477  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:09:14.216490  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:09:14.216503  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:09:14.216515  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:09:14.216565  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:09:14.216614  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:09:14.216631  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:09:14.216659  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:09:14.216681  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:09:14.216701  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:09:14.216736  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:09:14.216766  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.216779  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.216791  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.217491  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:09:14.245691  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:09:14.273248  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:09:14.298581  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:09:14.326607  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 03:09:14.354862  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:09:14.380624  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:09:14.406394  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:09:14.432387  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:09:14.458966  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:09:14.495459  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:09:14.533510  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:09:14.551518  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:09:14.558017  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:09:14.570545  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.575977  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.576029  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.584707  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:09:14.599663  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:09:14.613198  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.618600  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.618665  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.625256  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:09:14.638153  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:09:14.650876  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.655776  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.655830  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.661980  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
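Note: the three blocks above install each CA bundle under /usr/share/ca-certificates and create an OpenSSL subject-hash symlink (e.g. b5213941.0) in /etc/ssl/certs so trust lookups by hash resolve. A minimal sketch of how such a link is derived and checked, using the minikubeCA path from the log; the explicit hash variable is illustrative:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
    # The trust store needs a "<hash>.0" symlink pointing at the certificate:
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
    ls -l "/etc/ssl/certs/${hash}.0"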
	I0308 03:09:14.674420  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:09:14.679001  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:09:14.679067  927850 kubeadm.go:391] StartCluster: {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:09:14.679181  927850 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:09:14.679258  927850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:09:14.722289  927850 cri.go:89] found id: ""
	I0308 03:09:14.722393  927850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 03:09:14.734302  927850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 03:09:14.745847  927850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 03:09:14.757140  927850 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 03:09:14.757155  927850 kubeadm.go:156] found existing configuration files:
	
	I0308 03:09:14.757196  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 03:09:14.768257  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 03:09:14.768321  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 03:09:14.779640  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 03:09:14.790416  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 03:09:14.790480  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 03:09:14.801216  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 03:09:14.811327  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 03:09:14.811378  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 03:09:14.822063  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 03:09:14.834338  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 03:09:14.834392  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 03:09:14.846562  927850 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 03:09:15.095206  927850 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 03:09:28.955594  927850 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 03:09:28.955648  927850 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 03:09:28.955761  927850 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 03:09:28.955923  927850 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 03:09:28.956098  927850 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 03:09:28.956183  927850 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 03:09:28.957604  927850 out.go:204]   - Generating certificates and keys ...
	I0308 03:09:28.957708  927850 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 03:09:28.957821  927850 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 03:09:28.957939  927850 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 03:09:28.958041  927850 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 03:09:28.958167  927850 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 03:09:28.958269  927850 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 03:09:28.958375  927850 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 03:09:28.958483  927850 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-576225 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0308 03:09:28.958536  927850 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 03:09:28.958677  927850 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-576225 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0308 03:09:28.958746  927850 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 03:09:28.958810  927850 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 03:09:28.958865  927850 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 03:09:28.958957  927850 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 03:09:28.959020  927850 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 03:09:28.959068  927850 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 03:09:28.959163  927850 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 03:09:28.959249  927850 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 03:09:28.959353  927850 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 03:09:28.959443  927850 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 03:09:28.961878  927850 out.go:204]   - Booting up control plane ...
	I0308 03:09:28.961998  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 03:09:28.962109  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 03:09:28.962198  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 03:09:28.962341  927850 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 03:09:28.962454  927850 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 03:09:28.962508  927850 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 03:09:28.962714  927850 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 03:09:28.962846  927850 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.601132 seconds
	I0308 03:09:28.962996  927850 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 03:09:28.963183  927850 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 03:09:28.963274  927850 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 03:09:28.963497  927850 kubeadm.go:309] [mark-control-plane] Marking the node ha-576225 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 03:09:28.963568  927850 kubeadm.go:309] [bootstrap-token] Using token: ewomow.x8ox8qe7q1ouzoq2
	I0308 03:09:28.964909  927850 out.go:204]   - Configuring RBAC rules ...
	I0308 03:09:28.965016  927850 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 03:09:28.965115  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 03:09:28.965270  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 03:09:28.965482  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 03:09:28.965642  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 03:09:28.965769  927850 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 03:09:28.965929  927850 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 03:09:28.965998  927850 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 03:09:28.966059  927850 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 03:09:28.966069  927850 kubeadm.go:309] 
	I0308 03:09:28.966158  927850 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 03:09:28.966171  927850 kubeadm.go:309] 
	I0308 03:09:28.966288  927850 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 03:09:28.966297  927850 kubeadm.go:309] 
	I0308 03:09:28.966331  927850 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 03:09:28.966408  927850 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 03:09:28.966484  927850 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 03:09:28.966495  927850 kubeadm.go:309] 
	I0308 03:09:28.966577  927850 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 03:09:28.966588  927850 kubeadm.go:309] 
	I0308 03:09:28.966679  927850 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 03:09:28.966689  927850 kubeadm.go:309] 
	I0308 03:09:28.966762  927850 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 03:09:28.966878  927850 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 03:09:28.966981  927850 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 03:09:28.966990  927850 kubeadm.go:309] 
	I0308 03:09:28.967104  927850 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 03:09:28.967217  927850 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 03:09:28.967228  927850 kubeadm.go:309] 
	I0308 03:09:28.967339  927850 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ewomow.x8ox8qe7q1ouzoq2 \
	I0308 03:09:28.967486  927850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 03:09:28.967522  927850 kubeadm.go:309] 	--control-plane 
	I0308 03:09:28.967532  927850 kubeadm.go:309] 
	I0308 03:09:28.967657  927850 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 03:09:28.967670  927850 kubeadm.go:309] 
	I0308 03:09:28.967780  927850 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ewomow.x8ox8qe7q1ouzoq2 \
	I0308 03:09:28.967927  927850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 03:09:28.967943  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:09:28.967954  927850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 03:09:28.969403  927850 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 03:09:28.970646  927850 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 03:09:29.010907  927850 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 03:09:29.010936  927850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 03:09:29.071668  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
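Note: with a single node detected, minikube picks kindnet and applies its manifest through the bundled kubectl, as shown above. A minimal sketch for confirming the CNI pods afterwards; the app=kindnet label and the kubectl context name are assumptions based on kindnet's usual manifest, not taken from the log:

    # kindnet normally runs as a DaemonSet in kube-system labelled app=kindnet:
    kubectl --context ha-576225 -n kube-system get pods -l app=kindnet -o wide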
	I0308 03:09:30.064811  927850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 03:09:30.064892  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:30.065003  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225 minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=true
	I0308 03:09:30.091925  927850 ops.go:34] apiserver oom_adj: -16
	I0308 03:09:30.228715  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:30.728864  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:31.229363  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:31.729021  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:32.229171  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:32.728880  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:33.228830  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:33.728877  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:34.228892  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:34.729404  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:35.229030  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:35.729365  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:36.229126  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:36.728881  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:37.229163  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:37.729163  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:38.229518  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:38.337792  927850 kubeadm.go:1106] duration metric: took 8.272957989s to wait for elevateKubeSystemPrivileges
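Note: the repeated "kubectl get sa default" runs above are a poll; the cluster only counts as usable once the default ServiceAccount exists, which the log reports after ~8.3s. A minimal sketch of the same wait expressed directly in shell; the 60s timeout is an arbitrary choice, not from the log:

    # Poll until the "default" ServiceAccount appears, up to 60 seconds:
    for i in $(seq 1 60); do
      kubectl --context ha-576225 -n default get sa default >/dev/null 2>&1 && break
      sleep 1
    done
    kubectl --context ha-576225 -n default get sa default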
	W0308 03:09:38.337837  927850 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 03:09:38.337848  927850 kubeadm.go:393] duration metric: took 23.658793696s to StartCluster
	I0308 03:09:38.337884  927850 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:38.337996  927850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:09:38.338950  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:38.339160  927850 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:09:38.339182  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:09:38.339184  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 03:09:38.339198  927850 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 03:09:38.339241  927850 addons.go:69] Setting storage-provisioner=true in profile "ha-576225"
	I0308 03:09:38.339266  927850 addons.go:234] Setting addon storage-provisioner=true in "ha-576225"
	I0308 03:09:38.339287  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:09:38.339268  927850 addons.go:69] Setting default-storageclass=true in profile "ha-576225"
	I0308 03:09:38.339349  927850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-576225"
	I0308 03:09:38.339499  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:38.339719  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.339747  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.339758  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.339779  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.355154  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
	I0308 03:09:38.355620  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.356205  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.356254  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.356607  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.356849  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.359067  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:09:38.359428  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 03:09:38.359628  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0308 03:09:38.360013  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.360077  927850 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 03:09:38.360343  927850 addons.go:234] Setting addon default-storageclass=true in "ha-576225"
	I0308 03:09:38.360390  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:09:38.360482  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.360504  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.360774  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.360822  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.360883  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.361565  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.361616  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.375778  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0308 03:09:38.375972  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0308 03:09:38.376248  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.376399  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.376878  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.376896  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.376900  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.376915  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.377258  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.377328  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.377468  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.377880  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.377943  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.379267  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:38.381063  927850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:09:38.382412  927850 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:09:38.382435  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 03:09:38.382454  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:38.385750  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.386243  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:38.386275  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.386390  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:38.386563  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:38.386753  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:38.386903  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:38.394422  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0308 03:09:38.394804  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.395314  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.395346  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.395639  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.395797  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.397239  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:38.397469  927850 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 03:09:38.397486  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 03:09:38.397503  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:38.400081  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.400422  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:38.400442  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.400676  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:38.400860  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:38.401018  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:38.401171  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:38.567160  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 03:09:38.631601  927850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:09:38.633180  927850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 03:09:39.447931  927850 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
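The sed pipeline above is what injects the host.minikube.internal record into CoreDNS: it adds a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward directive and a log line after errors. As an illustrative check only, assuming the kubeconfig written earlier is still valid for this profile, the injected record could be read back with:

	kubectl --kubeconfig /home/jenkins/minikube-integration/18333-911675/kubeconfig -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'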
	I0308 03:09:39.726952  927850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.093728831s)
	I0308 03:09:39.727025  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727039  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727104  927850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095460452s)
	I0308 03:09:39.727170  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727182  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727377  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727407  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727421  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727434  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727448  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727454  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727460  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727469  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727475  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727704  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727732  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727744  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727759  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727763  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727794  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727900  927850 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0308 03:09:39.727907  927850 round_trippers.go:469] Request Headers:
	I0308 03:09:39.727915  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:09:39.727917  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:09:39.765174  927850 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0308 03:09:39.766610  927850 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0308 03:09:39.766630  927850 round_trippers.go:469] Request Headers:
	I0308 03:09:39.766638  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:09:39.766642  927850 round_trippers.go:473]     Content-Type: application/json
	I0308 03:09:39.766644  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:09:39.777780  927850 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0308 03:09:39.778064  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.778094  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.778395  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.778470  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.778495  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.780185  927850 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 03:09:39.781498  927850 addons.go:505] duration metric: took 1.442301659s for enable addons: enabled=[storage-provisioner default-storageclass]
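Both addons were applied with the bundled kubectl against the freshly started control plane. As a rough sanity check only, assuming the minikube binary driving this run is invoked as minikube, the profile's addon state could be listed with:

	minikube -p ha-576225 addons list | grep -E 'storage-provisioner|default-storageclass'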
	I0308 03:09:39.781543  927850 start.go:245] waiting for cluster config update ...
	I0308 03:09:39.781561  927850 start.go:254] writing updated cluster config ...
	I0308 03:09:39.783239  927850 out.go:177] 
	I0308 03:09:39.784625  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:39.784727  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:39.786321  927850 out.go:177] * Starting "ha-576225-m02" control-plane node in "ha-576225" cluster
	I0308 03:09:39.787575  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:09:39.787598  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:09:39.787690  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:09:39.787701  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:09:39.787764  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:39.787928  927850 start.go:360] acquireMachinesLock for ha-576225-m02: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:09:39.787970  927850 start.go:364] duration metric: took 23.713µs to acquireMachinesLock for "ha-576225-m02"
	I0308 03:09:39.787992  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:09:39.788057  927850 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0308 03:09:39.789998  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:09:39.790091  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:39.790127  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:39.806065  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0308 03:09:39.806664  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:39.807195  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:39.807228  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:39.807612  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:39.807835  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:09:39.807984  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:09:39.808166  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:09:39.808196  927850 client.go:168] LocalClient.Create starting
	I0308 03:09:39.808230  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:09:39.808279  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:09:39.808298  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:09:39.808373  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:09:39.808401  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:09:39.808417  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:09:39.808443  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:09:39.808455  927850 main.go:141] libmachine: (ha-576225-m02) Calling .PreCreateCheck
	I0308 03:09:39.808641  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:09:39.809039  927850 main.go:141] libmachine: Creating machine...
	I0308 03:09:39.809053  927850 main.go:141] libmachine: (ha-576225-m02) Calling .Create
	I0308 03:09:39.809191  927850 main.go:141] libmachine: (ha-576225-m02) Creating KVM machine...
	I0308 03:09:39.810615  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found existing default KVM network
	I0308 03:09:39.810719  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found existing private KVM network mk-ha-576225
	I0308 03:09:39.810857  927850 main.go:141] libmachine: (ha-576225-m02) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 ...
	I0308 03:09:39.810885  927850 main.go:141] libmachine: (ha-576225-m02) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:09:39.810964  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:39.810852  928212 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:09:39.811058  927850 main.go:141] libmachine: (ha-576225-m02) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:09:40.061605  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.061464  928212 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa...
	I0308 03:09:40.171537  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.171359  928212 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/ha-576225-m02.rawdisk...
	I0308 03:09:40.171597  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Writing magic tar header
	I0308 03:09:40.171615  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Writing SSH key tar header
	I0308 03:09:40.171639  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.171487  928212 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 ...
	I0308 03:09:40.171656  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 (perms=drwx------)
	I0308 03:09:40.171676  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:09:40.171690  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02
	I0308 03:09:40.171712  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:09:40.171722  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:09:40.171769  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:09:40.171793  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:09:40.171805  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:09:40.171822  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:09:40.171836  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:09:40.171848  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:09:40.171881  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home
	I0308 03:09:40.171897  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Skipping /home - not owner
	I0308 03:09:40.171916  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:09:40.171927  927850 main.go:141] libmachine: (ha-576225-m02) Creating domain...
	I0308 03:09:40.172826  927850 main.go:141] libmachine: (ha-576225-m02) define libvirt domain using xml: 
	I0308 03:09:40.172849  927850 main.go:141] libmachine: (ha-576225-m02) <domain type='kvm'>
	I0308 03:09:40.172859  927850 main.go:141] libmachine: (ha-576225-m02)   <name>ha-576225-m02</name>
	I0308 03:09:40.172866  927850 main.go:141] libmachine: (ha-576225-m02)   <memory unit='MiB'>2200</memory>
	I0308 03:09:40.172875  927850 main.go:141] libmachine: (ha-576225-m02)   <vcpu>2</vcpu>
	I0308 03:09:40.172884  927850 main.go:141] libmachine: (ha-576225-m02)   <features>
	I0308 03:09:40.172889  927850 main.go:141] libmachine: (ha-576225-m02)     <acpi/>
	I0308 03:09:40.172894  927850 main.go:141] libmachine: (ha-576225-m02)     <apic/>
	I0308 03:09:40.172899  927850 main.go:141] libmachine: (ha-576225-m02)     <pae/>
	I0308 03:09:40.172905  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.172911  927850 main.go:141] libmachine: (ha-576225-m02)   </features>
	I0308 03:09:40.172918  927850 main.go:141] libmachine: (ha-576225-m02)   <cpu mode='host-passthrough'>
	I0308 03:09:40.172925  927850 main.go:141] libmachine: (ha-576225-m02)   
	I0308 03:09:40.172935  927850 main.go:141] libmachine: (ha-576225-m02)   </cpu>
	I0308 03:09:40.172959  927850 main.go:141] libmachine: (ha-576225-m02)   <os>
	I0308 03:09:40.172977  927850 main.go:141] libmachine: (ha-576225-m02)     <type>hvm</type>
	I0308 03:09:40.172983  927850 main.go:141] libmachine: (ha-576225-m02)     <boot dev='cdrom'/>
	I0308 03:09:40.172990  927850 main.go:141] libmachine: (ha-576225-m02)     <boot dev='hd'/>
	I0308 03:09:40.173025  927850 main.go:141] libmachine: (ha-576225-m02)     <bootmenu enable='no'/>
	I0308 03:09:40.173047  927850 main.go:141] libmachine: (ha-576225-m02)   </os>
	I0308 03:09:40.173058  927850 main.go:141] libmachine: (ha-576225-m02)   <devices>
	I0308 03:09:40.173072  927850 main.go:141] libmachine: (ha-576225-m02)     <disk type='file' device='cdrom'>
	I0308 03:09:40.173092  927850 main.go:141] libmachine: (ha-576225-m02)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/boot2docker.iso'/>
	I0308 03:09:40.173125  927850 main.go:141] libmachine: (ha-576225-m02)       <target dev='hdc' bus='scsi'/>
	I0308 03:09:40.173139  927850 main.go:141] libmachine: (ha-576225-m02)       <readonly/>
	I0308 03:09:40.173151  927850 main.go:141] libmachine: (ha-576225-m02)     </disk>
	I0308 03:09:40.173166  927850 main.go:141] libmachine: (ha-576225-m02)     <disk type='file' device='disk'>
	I0308 03:09:40.173207  927850 main.go:141] libmachine: (ha-576225-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:09:40.173226  927850 main.go:141] libmachine: (ha-576225-m02)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/ha-576225-m02.rawdisk'/>
	I0308 03:09:40.173239  927850 main.go:141] libmachine: (ha-576225-m02)       <target dev='hda' bus='virtio'/>
	I0308 03:09:40.173253  927850 main.go:141] libmachine: (ha-576225-m02)     </disk>
	I0308 03:09:40.173264  927850 main.go:141] libmachine: (ha-576225-m02)     <interface type='network'>
	I0308 03:09:40.173313  927850 main.go:141] libmachine: (ha-576225-m02)       <source network='mk-ha-576225'/>
	I0308 03:09:40.173339  927850 main.go:141] libmachine: (ha-576225-m02)       <model type='virtio'/>
	I0308 03:09:40.173351  927850 main.go:141] libmachine: (ha-576225-m02)     </interface>
	I0308 03:09:40.173370  927850 main.go:141] libmachine: (ha-576225-m02)     <interface type='network'>
	I0308 03:09:40.173408  927850 main.go:141] libmachine: (ha-576225-m02)       <source network='default'/>
	I0308 03:09:40.173432  927850 main.go:141] libmachine: (ha-576225-m02)       <model type='virtio'/>
	I0308 03:09:40.173455  927850 main.go:141] libmachine: (ha-576225-m02)     </interface>
	I0308 03:09:40.173475  927850 main.go:141] libmachine: (ha-576225-m02)     <serial type='pty'>
	I0308 03:09:40.173489  927850 main.go:141] libmachine: (ha-576225-m02)       <target port='0'/>
	I0308 03:09:40.173514  927850 main.go:141] libmachine: (ha-576225-m02)     </serial>
	I0308 03:09:40.173528  927850 main.go:141] libmachine: (ha-576225-m02)     <console type='pty'>
	I0308 03:09:40.173541  927850 main.go:141] libmachine: (ha-576225-m02)       <target type='serial' port='0'/>
	I0308 03:09:40.173554  927850 main.go:141] libmachine: (ha-576225-m02)     </console>
	I0308 03:09:40.173566  927850 main.go:141] libmachine: (ha-576225-m02)     <rng model='virtio'>
	I0308 03:09:40.173590  927850 main.go:141] libmachine: (ha-576225-m02)       <backend model='random'>/dev/random</backend>
	I0308 03:09:40.173603  927850 main.go:141] libmachine: (ha-576225-m02)     </rng>
	I0308 03:09:40.173614  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.173627  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.173639  927850 main.go:141] libmachine: (ha-576225-m02)   </devices>
	I0308 03:09:40.173648  927850 main.go:141] libmachine: (ha-576225-m02) </domain>
	I0308 03:09:40.173659  927850 main.go:141] libmachine: (ha-576225-m02) 
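The XML above is the libvirt domain the kvm2 driver defines for ha-576225-m02: two virtio NICs (one on mk-ha-576225, one on the default network), the boot2docker ISO as a CD-ROM boot device, and the raw disk image. As a hedged aside, the stored definition and the DHCP-assigned addresses the driver waits for below could be inspected through the same qemu:///system URI recorded in the machine config:

	virsh --connect qemu:///system dumpxml ha-576225-m02
	virsh --connect qemu:///system domifaddr ha-576225-m02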
	I0308 03:09:40.180658  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:25:bc:c5 in network default
	I0308 03:09:40.181358  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:40.181381  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring networks are active...
	I0308 03:09:40.182217  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring network default is active
	I0308 03:09:40.182609  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring network mk-ha-576225 is active
	I0308 03:09:40.183053  927850 main.go:141] libmachine: (ha-576225-m02) Getting domain xml...
	I0308 03:09:40.183845  927850 main.go:141] libmachine: (ha-576225-m02) Creating domain...
	I0308 03:09:41.409071  927850 main.go:141] libmachine: (ha-576225-m02) Waiting to get IP...
	I0308 03:09:41.409950  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.410393  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.410425  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.410355  928212 retry.go:31] will retry after 236.493239ms: waiting for machine to come up
	I0308 03:09:41.648854  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.649310  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.649343  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.649236  928212 retry.go:31] will retry after 290.945002ms: waiting for machine to come up
	I0308 03:09:41.942049  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.942535  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.942574  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.942496  928212 retry.go:31] will retry after 446.637822ms: waiting for machine to come up
	I0308 03:09:42.391146  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:42.391602  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:42.391627  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:42.391553  928212 retry.go:31] will retry after 591.707727ms: waiting for machine to come up
	I0308 03:09:42.985370  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:42.985882  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:42.985918  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:42.985844  928212 retry.go:31] will retry after 572.398923ms: waiting for machine to come up
	I0308 03:09:43.559842  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:43.560465  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:43.560497  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:43.560418  928212 retry.go:31] will retry after 911.298328ms: waiting for machine to come up
	I0308 03:09:44.473019  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:44.473513  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:44.473546  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:44.473459  928212 retry.go:31] will retry after 1.130415745s: waiting for machine to come up
	I0308 03:09:45.605086  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:45.605606  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:45.605637  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:45.605561  928212 retry.go:31] will retry after 1.216381839s: waiting for machine to come up
	I0308 03:09:46.823962  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:46.824386  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:46.824428  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:46.824318  928212 retry.go:31] will retry after 1.299774618s: waiting for machine to come up
	I0308 03:09:48.125805  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:48.126236  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:48.126266  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:48.126175  928212 retry.go:31] will retry after 1.805876059s: waiting for machine to come up
	I0308 03:09:49.934160  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:49.934637  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:49.934669  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:49.934542  928212 retry.go:31] will retry after 2.221353292s: waiting for machine to come up
	I0308 03:09:52.158940  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:52.159290  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:52.159346  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:52.159227  928212 retry.go:31] will retry after 2.485920219s: waiting for machine to come up
	I0308 03:09:54.646384  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:54.646823  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:54.646852  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:54.646744  928212 retry.go:31] will retry after 3.903605035s: waiting for machine to come up
	I0308 03:09:58.556071  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:58.557077  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:58.557102  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:58.557039  928212 retry.go:31] will retry after 5.168694212s: waiting for machine to come up
	I0308 03:10:03.730530  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.731097  927850 main.go:141] libmachine: (ha-576225-m02) Found IP for machine: 192.168.39.128
	I0308 03:10:03.731124  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has current primary IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.731130  927850 main.go:141] libmachine: (ha-576225-m02) Reserving static IP address...
	I0308 03:10:03.731442  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find host DHCP lease matching {name: "ha-576225-m02", mac: "52:54:00:13:93:a0", ip: "192.168.39.128"} in network mk-ha-576225
	I0308 03:10:03.807303  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Getting to WaitForSSH function...
	I0308 03:10:03.807354  927850 main.go:141] libmachine: (ha-576225-m02) Reserved static IP address: 192.168.39.128
	I0308 03:10:03.807369  927850 main.go:141] libmachine: (ha-576225-m02) Waiting for SSH to be available...
	I0308 03:10:03.810205  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.810645  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:03.810681  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.810866  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using SSH client type: external
	I0308 03:10:03.810897  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa (-rw-------)
	I0308 03:10:03.810924  927850 main.go:141] libmachine: (ha-576225-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:10:03.810954  927850 main.go:141] libmachine: (ha-576225-m02) DBG | About to run SSH command:
	I0308 03:10:03.810972  927850 main.go:141] libmachine: (ha-576225-m02) DBG | exit 0
	I0308 03:10:03.937399  927850 main.go:141] libmachine: (ha-576225-m02) DBG | SSH cmd err, output: <nil>: 
	I0308 03:10:03.937692  927850 main.go:141] libmachine: (ha-576225-m02) KVM machine creation complete!
	I0308 03:10:03.937985  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:10:03.938616  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:03.938909  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:03.939102  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:10:03.939127  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:10:03.940444  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:10:03.940459  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:10:03.940467  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:10:03.940475  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:03.942977  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.943381  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:03.943410  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.943544  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:03.943773  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:03.943971  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:03.944088  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:03.944227  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:03.944518  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:03.944533  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:10:04.056763  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:10:04.056804  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:10:04.056816  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.059539  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.060000  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.060026  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.060220  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.060431  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.060639  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.060833  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.060999  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.061198  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.061212  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:10:04.174334  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:10:04.174469  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:10:04.174485  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:10:04.174495  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.174802  927850 buildroot.go:166] provisioning hostname "ha-576225-m02"
	I0308 03:10:04.174839  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.175101  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.177796  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.178188  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.178212  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.178381  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.178577  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.178758  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.178882  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.179048  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.179269  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.179296  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225-m02 && echo "ha-576225-m02" | sudo tee /etc/hostname
	I0308 03:10:04.307645  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225-m02
	
	I0308 03:10:04.307681  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.310639  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.311037  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.311071  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.311239  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.311470  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.311621  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.311777  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.311947  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.312162  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.312185  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:10:04.432237  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:10:04.432280  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:10:04.432326  927850 buildroot.go:174] setting up certificates
	I0308 03:10:04.432350  927850 provision.go:84] configureAuth start
	I0308 03:10:04.432370  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.432682  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:04.435463  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.435946  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.435972  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.436126  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.438265  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.438534  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.438579  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.438663  927850 provision.go:143] copyHostCerts
	I0308 03:10:04.438698  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:10:04.438744  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:10:04.438782  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:10:04.438878  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:10:04.438984  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:10:04.439014  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:10:04.439024  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:10:04.439065  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:10:04.439202  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:10:04.439228  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:10:04.439239  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:10:04.439282  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:10:04.439368  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225-m02 san=[127.0.0.1 192.168.39.128 ha-576225-m02 localhost minikube]
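configureAuth regenerates the machine server certificate with the SANs listed above. Purely as an illustrative follow-up, the SAN list in the freshly written server.pem could be verified on the host with openssl:

	openssl x509 -in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -noout -text | grep -A 1 'Subject Alternative Name'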
	I0308 03:10:04.539888  927850 provision.go:177] copyRemoteCerts
	I0308 03:10:04.539965  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:10:04.540000  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.542707  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.543093  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.543126  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.543310  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.543532  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.543706  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.543887  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:04.632170  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:10:04.632250  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:10:04.659155  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:10:04.659223  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 03:10:04.686224  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:10:04.686301  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:10:04.713436  927850 provision.go:87] duration metric: took 281.06682ms to configureAuth
	I0308 03:10:04.713465  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:10:04.713725  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:04.713826  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.716380  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.716812  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.716844  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.717039  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.717252  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.717478  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.717660  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.717816  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.718001  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.718018  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:10:04.998622  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:10:04.998654  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:10:04.998683  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetURL
	I0308 03:10:05.000139  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using libvirt version 6000000
	I0308 03:10:05.002291  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.002632  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.002667  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.002812  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:10:05.002828  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:10:05.002838  927850 client.go:171] duration metric: took 25.194633539s to LocalClient.Create
	I0308 03:10:05.002869  927850 start.go:167] duration metric: took 25.194706452s to libmachine.API.Create "ha-576225"
	I0308 03:10:05.002883  927850 start.go:293] postStartSetup for "ha-576225-m02" (driver="kvm2")
	I0308 03:10:05.002897  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:10:05.002933  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.003208  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:10:05.003238  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.005697  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.006069  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.006100  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.006233  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.006426  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.006618  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.006809  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.092684  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:10:05.097731  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:10:05.097767  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:10:05.097854  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:10:05.097956  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:10:05.097971  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:10:05.098068  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:10:05.109290  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:10:05.136257  927850 start.go:296] duration metric: took 133.359869ms for postStartSetup
	I0308 03:10:05.136308  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:10:05.136953  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:05.139714  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.140120  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.140157  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.140359  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:10:05.140540  927850 start.go:128] duration metric: took 25.352471686s to createHost
	I0308 03:10:05.140562  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.142815  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.143181  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.143213  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.143365  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.143541  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.143709  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.143869  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.144099  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:05.144317  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:05.144332  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:10:05.254384  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867405.226468429
	
	I0308 03:10:05.254421  927850 fix.go:216] guest clock: 1709867405.226468429
	I0308 03:10:05.254433  927850 fix.go:229] Guest: 2024-03-08 03:10:05.226468429 +0000 UTC Remote: 2024-03-08 03:10:05.14055208 +0000 UTC m=+84.893491005 (delta=85.916349ms)
	I0308 03:10:05.254457  927850 fix.go:200] guest clock delta is within tolerance: 85.916349ms
	I0308 03:10:05.254464  927850 start.go:83] releasing machines lock for "ha-576225-m02", held for 25.466484706s
	I0308 03:10:05.254490  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.254868  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:05.257667  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.258151  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.258186  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.260484  927850 out.go:177] * Found network options:
	I0308 03:10:05.261947  927850 out.go:177]   - NO_PROXY=192.168.39.251
	W0308 03:10:05.263198  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:10:05.263246  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.263770  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.263994  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.264087  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:10:05.264129  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	W0308 03:10:05.264237  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:10:05.264350  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:10:05.264382  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.266761  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267094  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267133  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.267159  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267326  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.267452  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.267478  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267532  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.267616  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.267704  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.267767  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.267848  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.267897  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.268026  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.519905  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:10:05.526939  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:10:05.527008  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:10:05.544584  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:10:05.544612  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:10:05.544695  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:10:05.563315  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:10:05.577946  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:10:05.578002  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:10:05.592325  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:10:05.607078  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:10:05.744285  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:10:05.935003  927850 docker.go:233] disabling docker service ...
	I0308 03:10:05.935087  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:10:05.951624  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:10:05.965500  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:10:06.096777  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:10:06.228954  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:10:06.244692  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:10:06.265652  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:10:06.265759  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.277177  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:10:06.277255  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.288480  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.299390  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.310274  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:10:06.321396  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:10:06.331343  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:10:06.331404  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:10:06.344851  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:10:06.354486  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:06.478622  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
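	Between 03:10:06.26 and 03:10:06.48 the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", then reloads systemd and restarts CRI-O. Below is a minimal sketch of the same substitution expressed in Go instead of the logged sed command; setCgroupManager is a hypothetical helper, not minikube code.

	package criocfg

	import "regexp"

	// setCgroupManager rewrites the cgroup_manager line in a crio.conf drop-in,
	// the same substitution the `sed -i` above performs (sketch only).
	func setCgroupManager(conf []byte, manager string) []byte {
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		return re.ReplaceAll(conf, []byte(`cgroup_manager = "`+manager+`"`))
	}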
	I0308 03:10:06.625431  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:10:06.625522  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:10:06.630783  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:10:06.630850  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:10:06.635051  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:10:06.675945  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:10:06.676022  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:10:06.709412  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:10:06.740409  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:10:06.741755  927850 out.go:177]   - env NO_PROXY=192.168.39.251
	I0308 03:10:06.742884  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:06.745660  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:06.745995  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:06.746018  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:06.746319  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:10:06.750763  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:10:06.764553  927850 mustload.go:65] Loading cluster: ha-576225
	I0308 03:10:06.764726  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:06.765007  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:06.765035  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:06.779680  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0308 03:10:06.780134  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:06.780636  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:06.780659  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:06.781019  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:06.781208  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:10:06.782879  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:10:06.783158  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:06.783189  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:06.797495  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0308 03:10:06.797980  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:06.798455  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:06.798476  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:06.798773  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:06.798958  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:10:06.799106  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.128
	I0308 03:10:06.799125  927850 certs.go:194] generating shared ca certs ...
	I0308 03:10:06.799144  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:06.799270  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:10:06.799308  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:10:06.799319  927850 certs.go:256] generating profile certs ...
	I0308 03:10:06.799385  927850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:10:06.799410  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907
	I0308 03:10:06.799424  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.254]
	I0308 03:10:07.059503  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 ...
	I0308 03:10:07.059536  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907: {Name:mk4518f2838cb83538c6e1c972800ca0fb4818ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:07.059710  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907 ...
	I0308 03:10:07.059724  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907: {Name:mk8e30d5c74032633160373e582b2bd039ca9f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:07.059795  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:10:07.059930  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:10:07.060074  927850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:10:07.060092  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:10:07.060104  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:10:07.060118  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:10:07.060130  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:10:07.060146  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:10:07.060157  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:10:07.060167  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:10:07.060176  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:10:07.060223  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:10:07.060254  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:10:07.060264  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:10:07.060285  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:10:07.060307  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:10:07.060330  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:10:07.060366  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:10:07.060391  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.060404  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.060416  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.060450  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:10:07.063350  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:07.063811  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:10:07.063849  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:07.064050  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:10:07.064263  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:10:07.064441  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:10:07.064580  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:10:07.145650  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0308 03:10:07.151560  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0308 03:10:07.164450  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0308 03:10:07.169489  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0308 03:10:07.181017  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0308 03:10:07.189382  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0308 03:10:07.203506  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0308 03:10:07.208641  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0308 03:10:07.223311  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0308 03:10:07.228747  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0308 03:10:07.245859  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0308 03:10:07.250855  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0308 03:10:07.263944  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:10:07.295869  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:10:07.325852  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:10:07.353308  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:10:07.379736  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0308 03:10:07.406994  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:10:07.433332  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:10:07.460120  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:10:07.486225  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:10:07.511795  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:10:07.538188  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:10:07.569432  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0308 03:10:07.593900  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0308 03:10:07.612561  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0308 03:10:07.631212  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0308 03:10:07.649664  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0308 03:10:07.668302  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0308 03:10:07.686140  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0308 03:10:07.703806  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:10:07.709839  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:10:07.721289  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.726266  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.726324  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.732304  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:10:07.743882  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:10:07.755295  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.760186  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.760244  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.766107  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:10:07.777291  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:10:07.788697  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.794094  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.794151  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.800025  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:10:07.811159  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:10:07.815692  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:10:07.815757  927850 kubeadm.go:928] updating node {m02 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0308 03:10:07.815879  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:10:07.815909  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:10:07.815936  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
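	The manifest above is the kube-vip static pod that the kube-vip config step generates for each control-plane node: it runs on the host network with NET_ADMIN/NET_RAW, takes part in leader election (plndr-cp-lock) and announces the cluster VIP 192.168.39.254 via ARP so control-plane.minikube.internal:8443 stays reachable if a node drops out. Below is a minimal sketch of templating such a manifest with only the per-cluster fields parameterised; the Params/Render names and the trimmed field set are illustrative assumptions, not the actual kube-vip.go code.

	package kubevipsketch

	import (
		"bytes"
		"text/template"
	)

	// A trimmed-down stand-in for the manifest above; only the fields that vary
	// per cluster are templated.
	const podTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{ .Image }}
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	type Params struct {
		Image string
		VIP   string
		Port  int
	}

	// Render fills the template with cluster-specific values.
	func Render(p Params) (string, error) {
		t, err := template.New("kube-vip").Parse(podTmpl)
		if err != nil {
			return "", err
		}
		var buf bytes.Buffer
		if err := t.Execute(&buf, p); err != nil {
			return "", err
		}
		return buf.String(), nil
	}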
	I0308 03:10:07.815975  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:10:07.826637  927850 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0308 03:10:07.826683  927850 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0308 03:10:07.837110  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0308 03:10:07.837131  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:10:07.837188  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:10:07.837211  927850 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0308 03:10:07.837217  927850 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0308 03:10:07.841970  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 03:10:07.842000  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0308 03:10:08.978992  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:10:08.994317  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:10:08.994425  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:10:08.999797  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 03:10:08.999834  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0308 03:10:11.747758  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:10:11.747854  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:10:11.753769  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 03:10:11.753815  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0308 03:10:12.019264  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0308 03:10:12.031121  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0308 03:10:12.050893  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:10:12.070433  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:10:12.088453  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:10:12.092727  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:10:12.107128  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:12.255678  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:10:12.275756  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:10:12.276086  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:12.276114  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:12.291159  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0308 03:10:12.291608  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:12.292095  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:12.292121  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:12.292483  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:12.292697  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:10:12.292844  927850 start.go:316] joinCluster: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:10:12.292971  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 03:10:12.292992  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:10:12.296265  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:12.296708  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:10:12.296731  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:12.296911  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:10:12.297105  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:10:12.297270  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:10:12.297431  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:10:12.479835  927850 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:10:12.479894  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5tskky.r2mo3r85yoyvy2ry --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m02 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
	I0308 03:10:52.931150  927850 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5tskky.r2mo3r85yoyvy2ry --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m02 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443": (40.451218401s)
	I0308 03:10:52.931210  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 03:10:53.367922  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225-m02 minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=false
	I0308 03:10:53.491882  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-576225-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0308 03:10:53.623511  927850 start.go:318] duration metric: took 41.330661s to joinCluster
	I0308 03:10:53.623601  927850 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:10:53.625857  927850 out.go:177] * Verifying Kubernetes components...
	I0308 03:10:53.623924  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:53.627218  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:53.802553  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:10:53.820691  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:10:53.820977  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0308 03:10:53.821054  927850 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.251:8443
	I0308 03:10:53.821246  927850 node_ready.go:35] waiting up to 6m0s for node "ha-576225-m02" to be "Ready" ...
	I0308 03:10:53.821413  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:53.821422  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:53.821430  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:53.821433  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:53.831189  927850 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0308 03:10:54.322156  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:54.322178  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:54.322187  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:54.322191  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:54.328637  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:10:54.822207  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:54.822228  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:54.822237  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:54.822241  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:54.838652  927850 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0308 03:10:55.322040  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:55.322063  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:55.322072  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:55.322077  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:55.325920  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:55.821581  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:55.821608  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:55.821620  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:55.821625  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:55.825637  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:55.826162  927850 node_ready.go:53] node "ha-576225-m02" has status "Ready":"False"
	I0308 03:10:56.322524  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:56.322549  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:56.322558  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:56.322562  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:56.328145  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:10:56.821872  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:56.821895  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:56.821902  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:56.821906  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:56.824917  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:57.321946  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:57.321968  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:57.321976  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:57.321980  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:57.325866  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:57.821975  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:57.822006  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:57.822016  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:57.822020  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:57.825969  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:57.826687  927850 node_ready.go:53] node "ha-576225-m02" has status "Ready":"False"
	I0308 03:10:58.322118  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:58.322150  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:58.322159  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:58.322164  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:58.326088  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:58.821490  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:58.821516  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:58.821525  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:58.821529  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:58.825017  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.321723  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.321749  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.321761  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.321765  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.325337  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.821569  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.821590  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.821603  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.821612  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.825043  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.825842  927850 node_ready.go:49] node "ha-576225-m02" has status "Ready":"True"
	I0308 03:10:59.825863  927850 node_ready.go:38] duration metric: took 6.004571208s for node "ha-576225-m02" to be "Ready" ...
	I0308 03:10:59.825872  927850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:10:59.825969  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:10:59.825979  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.825987  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.825989  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.830823  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:10:59.836818  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.836894  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8qvhp
	I0308 03:10:59.836903  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.836910  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.836914  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.839755  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.840405  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.840423  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.840430  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.840434  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.842984  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.843708  927850 pod_ready.go:92] pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.843726  927850 pod_ready.go:81] duration metric: took 6.883358ms for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.843736  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.843790  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pqz96
	I0308 03:10:59.843801  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.843811  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.843835  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.846414  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.847169  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.847184  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.847190  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.847195  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.849510  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.850109  927850 pod_ready.go:92] pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.850132  927850 pod_ready.go:81] duration metric: took 6.388886ms for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.850144  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.850209  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225
	I0308 03:10:59.850220  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.850230  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.850236  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.852859  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.853417  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.853435  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.853441  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.853445  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.855891  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.856435  927850 pod_ready.go:92] pod "etcd-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.856449  927850 pod_ready.go:81] duration metric: took 6.293059ms for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.856457  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.856501  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m02
	I0308 03:10:59.856508  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.856515  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.856520  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.859426  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.860412  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.860427  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.860433  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.860436  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.864403  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.865367  927850 pod_ready.go:92] pod "etcd-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.865382  927850 pod_ready.go:81] duration metric: took 8.919794ms for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.865394  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:00.021743  927850 request.go:629] Waited for 156.268188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:11:00.021814  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:11:00.021819  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.021827  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.021831  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.025408  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.222580  927850 request.go:629] Waited for 196.401798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:00.222643  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:00.222647  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.222655  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.222659  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.226600  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.227235  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:00.227260  927850 pod_ready.go:81] duration metric: took 361.860232ms for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:00.227270  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:00.422084  927850 request.go:629] Waited for 194.716633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.422176  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.422188  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.422202  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.422212  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.425971  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.621638  927850 request.go:629] Waited for 194.290053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:00.621699  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:00.621704  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.621712  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.621716  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.625381  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.822344  927850 request.go:629] Waited for 94.327919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.822409  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.822416  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.822429  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.822439  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.825937  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.022140  927850 request.go:629] Waited for 195.398128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.022243  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.022254  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.022264  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.022269  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.026179  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.228037  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:01.228067  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.228079  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.228085  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.231964  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.422114  927850 request.go:629] Waited for 189.353352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.422177  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.422182  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.422190  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.422194  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.426751  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:01.728306  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:01.728342  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.728351  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.728357  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.732820  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:01.822083  927850 request.go:629] Waited for 87.699265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.822163  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.822177  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.822188  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.822194  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.825541  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:02.227502  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:02.227526  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.227534  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.227538  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.230944  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:02.231604  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:02.231621  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.231628  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.231632  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.234638  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:02.235397  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"False"
	I0308 03:11:02.728289  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:02.728314  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.728322  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.728327  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.732588  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:02.733650  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:02.733669  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.733680  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.733687  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.736738  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:03.227801  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:03.227829  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.227840  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.227845  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.231929  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:03.232757  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:03.232770  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.232778  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.232781  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.236055  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:03.728016  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:03.728044  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.728056  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.728062  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.734893  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:11:03.735870  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:03.735887  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.735894  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.735900  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.738588  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:04.227593  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:04.227617  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.227626  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.227629  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.231236  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.232176  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:04.232192  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.232202  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.232209  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.235355  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.236206  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"False"
	I0308 03:11:04.727591  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:04.727625  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.727634  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.727639  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.731688  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:04.732340  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:04.732357  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.732364  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.732369  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.735759  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.736889  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:04.736910  927850 pod_ready.go:81] duration metric: took 4.509633326s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.736920  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.736979  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225
	I0308 03:11:04.736992  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.737000  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.737007  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.740014  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:04.740555  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:04.740570  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.740577  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.740581  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.744105  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.744907  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:04.744923  927850 pod_ready.go:81] duration metric: took 7.997063ms for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.744932  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.821885  927850 request.go:629] Waited for 76.877856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:11:04.821977  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:11:04.821990  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.822001  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.822006  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.826196  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.022297  927850 request.go:629] Waited for 195.390269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.022380  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.022386  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.022395  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.022401  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.026876  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.027839  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.027864  927850 pod_ready.go:81] duration metric: took 282.922993ms for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.027879  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.222351  927850 request.go:629] Waited for 194.381308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:11:05.222432  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:11:05.222437  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.222445  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.222462  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.226185  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.421873  927850 request.go:629] Waited for 194.774695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:05.421949  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:05.421958  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.421969  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.421977  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.426262  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.427124  927850 pod_ready.go:92] pod "kube-proxy-pcmj2" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.427147  927850 pod_ready.go:81] duration metric: took 399.259295ms for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.427158  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.622186  927850 request.go:629] Waited for 194.942273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:11:05.622304  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:11:05.622311  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.622342  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.622353  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.625978  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.822060  927850 request.go:629] Waited for 195.367031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.822152  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.822165  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.822176  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.822185  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.825261  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.826003  927850 pod_ready.go:92] pod "kube-proxy-vjfqv" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.826027  927850 pod_ready.go:81] duration metric: took 398.861018ms for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.826040  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.022230  927850 request.go:629] Waited for 196.09097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:11:06.022335  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:11:06.022346  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.022357  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.022368  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.025832  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.222037  927850 request.go:629] Waited for 195.346155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:06.222095  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:06.222099  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.222107  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.222111  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.225424  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.226317  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:06.226335  927850 pod_ready.go:81] duration metric: took 400.288016ms for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.226348  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.422432  927850 request.go:629] Waited for 195.999355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:11:06.422535  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:11:06.422541  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.422549  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.422556  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.426177  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.622310  927850 request.go:629] Waited for 195.382474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:06.622426  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:06.622443  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.622459  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.622465  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.625751  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.626462  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:06.626496  927850 pod_ready.go:81] duration metric: took 400.136357ms for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.626525  927850 pod_ready.go:38] duration metric: took 6.800614949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:11:06.626568  927850 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:11:06.626728  927850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:11:06.643192  927850 api_server.go:72] duration metric: took 13.019514528s to wait for apiserver process to appear ...
	I0308 03:11:06.643216  927850 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:11:06.643236  927850 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0308 03:11:06.648033  927850 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0308 03:11:06.648100  927850 round_trippers.go:463] GET https://192.168.39.251:8443/version
	I0308 03:11:06.648108  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.648115  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.648122  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.651408  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.651652  927850 api_server.go:141] control plane version: v1.28.4
	I0308 03:11:06.651674  927850 api_server.go:131] duration metric: took 8.45101ms to wait for apiserver health ...
	I0308 03:11:06.651683  927850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:11:06.822013  927850 request.go:629] Waited for 170.250813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:06.822139  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:06.822150  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.822159  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.822169  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.828581  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:11:06.835213  927850 system_pods.go:59] 17 kube-system pods found
	I0308 03:11:06.835245  927850 system_pods.go:61] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:11:06.835253  927850 system_pods.go:61] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:11:06.835259  927850 system_pods.go:61] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:11:06.835263  927850 system_pods.go:61] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:11:06.835268  927850 system_pods.go:61] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:11:06.835272  927850 system_pods.go:61] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:11:06.835277  927850 system_pods.go:61] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:11:06.835285  927850 system_pods.go:61] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:11:06.835291  927850 system_pods.go:61] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:11:06.835299  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:11:06.835305  927850 system_pods.go:61] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:11:06.835310  927850 system_pods.go:61] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:11:06.835321  927850 system_pods.go:61] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:11:06.835332  927850 system_pods.go:61] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:11:06.835336  927850 system_pods.go:61] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:11:06.835340  927850 system_pods.go:61] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:11:06.835344  927850 system_pods.go:61] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:11:06.835352  927850 system_pods.go:74] duration metric: took 183.663118ms to wait for pod list to return data ...
	I0308 03:11:06.835363  927850 default_sa.go:34] waiting for default service account to be created ...
	I0308 03:11:07.021668  927850 request.go:629] Waited for 186.212568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:11:07.021737  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:11:07.021745  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.021764  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.021775  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.025514  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:07.025881  927850 default_sa.go:45] found service account: "default"
	I0308 03:11:07.025910  927850 default_sa.go:55] duration metric: took 190.535225ms for default service account to be created ...
	I0308 03:11:07.025923  927850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 03:11:07.222137  927850 request.go:629] Waited for 196.091036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:07.222218  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:07.222226  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.222239  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.222248  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.241058  927850 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0308 03:11:07.246418  927850 system_pods.go:86] 17 kube-system pods found
	I0308 03:11:07.246447  927850 system_pods.go:89] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:11:07.246457  927850 system_pods.go:89] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:11:07.246462  927850 system_pods.go:89] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:11:07.246466  927850 system_pods.go:89] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:11:07.246470  927850 system_pods.go:89] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:11:07.246474  927850 system_pods.go:89] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:11:07.246478  927850 system_pods.go:89] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:11:07.246482  927850 system_pods.go:89] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:11:07.246486  927850 system_pods.go:89] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:11:07.246490  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:11:07.246495  927850 system_pods.go:89] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:11:07.246498  927850 system_pods.go:89] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:11:07.246505  927850 system_pods.go:89] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:11:07.246509  927850 system_pods.go:89] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:11:07.246513  927850 system_pods.go:89] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:11:07.246517  927850 system_pods.go:89] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:11:07.246523  927850 system_pods.go:89] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:11:07.246529  927850 system_pods.go:126] duration metric: took 220.600615ms to wait for k8s-apps to be running ...
	I0308 03:11:07.246546  927850 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 03:11:07.246593  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:11:07.266495  927850 system_svc.go:56] duration metric: took 19.940564ms WaitForService to wait for kubelet
	I0308 03:11:07.266530  927850 kubeadm.go:576] duration metric: took 13.642854924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:11:07.266554  927850 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:11:07.422263  927850 request.go:629] Waited for 155.593577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes
	I0308 03:11:07.422320  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes
	I0308 03:11:07.422325  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.422332  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.422340  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.426232  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:07.427517  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:11:07.427553  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:11:07.427571  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:11:07.427577  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:11:07.427583  927850 node_conditions.go:105] duration metric: took 161.022579ms to run NodePressure ...
	I0308 03:11:07.427601  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:11:07.427632  927850 start.go:254] writing updated cluster config ...
	I0308 03:11:07.429792  927850 out.go:177] 
	I0308 03:11:07.431381  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:07.431517  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:07.433325  927850 out.go:177] * Starting "ha-576225-m03" control-plane node in "ha-576225" cluster
	I0308 03:11:07.434574  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:11:07.434598  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:11:07.434692  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:11:07.434704  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:11:07.434784  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:07.434982  927850 start.go:360] acquireMachinesLock for ha-576225-m03: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:11:07.435031  927850 start.go:364] duration metric: took 25.816µs to acquireMachinesLock for "ha-576225-m03"
	I0308 03:11:07.435050  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:11:07.435158  927850 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0308 03:11:07.437487  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:11:07.437569  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:07.437594  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:07.453229  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0308 03:11:07.453625  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:07.454136  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:07.454166  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:07.454474  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:07.454676  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:07.454862  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:07.455027  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:11:07.455052  927850 client.go:168] LocalClient.Create starting
	I0308 03:11:07.455098  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:11:07.455156  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:11:07.455179  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:11:07.455252  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:11:07.455283  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:11:07.455300  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:11:07.455326  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:11:07.455338  927850 main.go:141] libmachine: (ha-576225-m03) Calling .PreCreateCheck
	I0308 03:11:07.455513  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:07.455903  927850 main.go:141] libmachine: Creating machine...
	I0308 03:11:07.455939  927850 main.go:141] libmachine: (ha-576225-m03) Calling .Create
	I0308 03:11:07.456058  927850 main.go:141] libmachine: (ha-576225-m03) Creating KVM machine...
	I0308 03:11:07.457294  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found existing default KVM network
	I0308 03:11:07.457440  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found existing private KVM network mk-ha-576225
	I0308 03:11:07.457580  927850 main.go:141] libmachine: (ha-576225-m03) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 ...
	I0308 03:11:07.457604  927850 main.go:141] libmachine: (ha-576225-m03) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:11:07.457669  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.457559  928590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:11:07.457758  927850 main.go:141] libmachine: (ha-576225-m03) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:11:07.705383  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.705216  928590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa...
	I0308 03:11:07.778475  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.778328  928590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/ha-576225-m03.rawdisk...
	I0308 03:11:07.778529  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Writing magic tar header
	I0308 03:11:07.778548  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Writing SSH key tar header
	I0308 03:11:07.778561  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.778499  928590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 ...
	I0308 03:11:07.778721  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03
	I0308 03:11:07.778756  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:11:07.778773  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 (perms=drwx------)
	I0308 03:11:07.778786  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:11:07.778793  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:11:07.778801  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:11:07.778812  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:11:07.778835  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:11:07.778850  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:11:07.778860  927850 main.go:141] libmachine: (ha-576225-m03) Creating domain...
	I0308 03:11:07.778871  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:11:07.778886  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:11:07.778903  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:11:07.778916  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home
	I0308 03:11:07.778927  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Skipping /home - not owner
	I0308 03:11:07.779938  927850 main.go:141] libmachine: (ha-576225-m03) define libvirt domain using xml: 
	I0308 03:11:07.779969  927850 main.go:141] libmachine: (ha-576225-m03) <domain type='kvm'>
	I0308 03:11:07.779981  927850 main.go:141] libmachine: (ha-576225-m03)   <name>ha-576225-m03</name>
	I0308 03:11:07.779994  927850 main.go:141] libmachine: (ha-576225-m03)   <memory unit='MiB'>2200</memory>
	I0308 03:11:07.780004  927850 main.go:141] libmachine: (ha-576225-m03)   <vcpu>2</vcpu>
	I0308 03:11:07.780015  927850 main.go:141] libmachine: (ha-576225-m03)   <features>
	I0308 03:11:07.780029  927850 main.go:141] libmachine: (ha-576225-m03)     <acpi/>
	I0308 03:11:07.780040  927850 main.go:141] libmachine: (ha-576225-m03)     <apic/>
	I0308 03:11:07.780053  927850 main.go:141] libmachine: (ha-576225-m03)     <pae/>
	I0308 03:11:07.780064  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780077  927850 main.go:141] libmachine: (ha-576225-m03)   </features>
	I0308 03:11:07.780093  927850 main.go:141] libmachine: (ha-576225-m03)   <cpu mode='host-passthrough'>
	I0308 03:11:07.780101  927850 main.go:141] libmachine: (ha-576225-m03)   
	I0308 03:11:07.780110  927850 main.go:141] libmachine: (ha-576225-m03)   </cpu>
	I0308 03:11:07.780118  927850 main.go:141] libmachine: (ha-576225-m03)   <os>
	I0308 03:11:07.780128  927850 main.go:141] libmachine: (ha-576225-m03)     <type>hvm</type>
	I0308 03:11:07.780140  927850 main.go:141] libmachine: (ha-576225-m03)     <boot dev='cdrom'/>
	I0308 03:11:07.780154  927850 main.go:141] libmachine: (ha-576225-m03)     <boot dev='hd'/>
	I0308 03:11:07.780173  927850 main.go:141] libmachine: (ha-576225-m03)     <bootmenu enable='no'/>
	I0308 03:11:07.780186  927850 main.go:141] libmachine: (ha-576225-m03)   </os>
	I0308 03:11:07.780197  927850 main.go:141] libmachine: (ha-576225-m03)   <devices>
	I0308 03:11:07.780208  927850 main.go:141] libmachine: (ha-576225-m03)     <disk type='file' device='cdrom'>
	I0308 03:11:07.780223  927850 main.go:141] libmachine: (ha-576225-m03)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/boot2docker.iso'/>
	I0308 03:11:07.780237  927850 main.go:141] libmachine: (ha-576225-m03)       <target dev='hdc' bus='scsi'/>
	I0308 03:11:07.780254  927850 main.go:141] libmachine: (ha-576225-m03)       <readonly/>
	I0308 03:11:07.780268  927850 main.go:141] libmachine: (ha-576225-m03)     </disk>
	I0308 03:11:07.780297  927850 main.go:141] libmachine: (ha-576225-m03)     <disk type='file' device='disk'>
	I0308 03:11:07.780315  927850 main.go:141] libmachine: (ha-576225-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:11:07.780335  927850 main.go:141] libmachine: (ha-576225-m03)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/ha-576225-m03.rawdisk'/>
	I0308 03:11:07.780350  927850 main.go:141] libmachine: (ha-576225-m03)       <target dev='hda' bus='virtio'/>
	I0308 03:11:07.780362  927850 main.go:141] libmachine: (ha-576225-m03)     </disk>
	I0308 03:11:07.780377  927850 main.go:141] libmachine: (ha-576225-m03)     <interface type='network'>
	I0308 03:11:07.780390  927850 main.go:141] libmachine: (ha-576225-m03)       <source network='mk-ha-576225'/>
	I0308 03:11:07.780403  927850 main.go:141] libmachine: (ha-576225-m03)       <model type='virtio'/>
	I0308 03:11:07.780427  927850 main.go:141] libmachine: (ha-576225-m03)     </interface>
	I0308 03:11:07.780449  927850 main.go:141] libmachine: (ha-576225-m03)     <interface type='network'>
	I0308 03:11:07.780462  927850 main.go:141] libmachine: (ha-576225-m03)       <source network='default'/>
	I0308 03:11:07.780474  927850 main.go:141] libmachine: (ha-576225-m03)       <model type='virtio'/>
	I0308 03:11:07.780485  927850 main.go:141] libmachine: (ha-576225-m03)     </interface>
	I0308 03:11:07.780495  927850 main.go:141] libmachine: (ha-576225-m03)     <serial type='pty'>
	I0308 03:11:07.780510  927850 main.go:141] libmachine: (ha-576225-m03)       <target port='0'/>
	I0308 03:11:07.780524  927850 main.go:141] libmachine: (ha-576225-m03)     </serial>
	I0308 03:11:07.780534  927850 main.go:141] libmachine: (ha-576225-m03)     <console type='pty'>
	I0308 03:11:07.780545  927850 main.go:141] libmachine: (ha-576225-m03)       <target type='serial' port='0'/>
	I0308 03:11:07.780556  927850 main.go:141] libmachine: (ha-576225-m03)     </console>
	I0308 03:11:07.780567  927850 main.go:141] libmachine: (ha-576225-m03)     <rng model='virtio'>
	I0308 03:11:07.780583  927850 main.go:141] libmachine: (ha-576225-m03)       <backend model='random'>/dev/random</backend>
	I0308 03:11:07.780597  927850 main.go:141] libmachine: (ha-576225-m03)     </rng>
	I0308 03:11:07.780609  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780618  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780626  927850 main.go:141] libmachine: (ha-576225-m03)   </devices>
	I0308 03:11:07.780636  927850 main.go:141] libmachine: (ha-576225-m03) </domain>
	I0308 03:11:07.780644  927850 main.go:141] libmachine: (ha-576225-m03) 
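The domain XML logged above is handed to libvirt as-is. As a rough illustration only (not minikube's actual driver code), the same define-then-create flow can be expressed with the libvirt Go bindings; the package path and the local XML file name here are assumptions for the sketch.

	// Minimal sketch, assuming github.com/libvirt/libvirt-go is available and
	// the XML above has been saved to a local file (hypothetical name).
	package main

	import (
		"log"
		"os"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		xml, err := os.ReadFile("ha-576225-m03.xml") // hypothetical file holding the XML logged above
		if err != nil {
			log.Fatal(err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// "define libvirt domain using xml", then boot it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}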
	I0308 03:11:07.787525  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:5a:cf:77 in network default
	I0308 03:11:07.788496  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:07.788514  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring networks are active...
	I0308 03:11:07.789377  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring network default is active
	I0308 03:11:07.789748  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring network mk-ha-576225 is active
	I0308 03:11:07.790211  927850 main.go:141] libmachine: (ha-576225-m03) Getting domain xml...
	I0308 03:11:07.791003  927850 main.go:141] libmachine: (ha-576225-m03) Creating domain...
	I0308 03:11:09.000039  927850 main.go:141] libmachine: (ha-576225-m03) Waiting to get IP...
	I0308 03:11:09.000875  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.001266  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.001330  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.001258  928590 retry.go:31] will retry after 216.744664ms: waiting for machine to come up
	I0308 03:11:09.220137  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.220744  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.220799  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.220673  928590 retry.go:31] will retry after 344.32551ms: waiting for machine to come up
	I0308 03:11:09.566272  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.566783  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.566814  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.566721  928590 retry.go:31] will retry after 418.834054ms: waiting for machine to come up
	I0308 03:11:09.987101  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.987623  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.987654  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.987563  928590 retry.go:31] will retry after 368.096971ms: waiting for machine to come up
	I0308 03:11:10.357008  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:10.357499  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:10.357525  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:10.357447  928590 retry.go:31] will retry after 735.02061ms: waiting for machine to come up
	I0308 03:11:11.094424  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:11.094943  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:11.094976  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:11.094880  928590 retry.go:31] will retry after 803.752614ms: waiting for machine to come up
	I0308 03:11:11.900117  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:11.900627  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:11.900655  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:11.900567  928590 retry.go:31] will retry after 853.28583ms: waiting for machine to come up
	I0308 03:11:12.755426  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:12.755964  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:12.756037  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:12.755952  928590 retry.go:31] will retry after 1.409037774s: waiting for machine to come up
	I0308 03:11:14.166667  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:14.167183  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:14.167236  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:14.167106  928590 retry.go:31] will retry after 1.591994181s: waiting for machine to come up
	I0308 03:11:15.760930  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:15.761465  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:15.761493  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:15.761405  928590 retry.go:31] will retry after 1.956770276s: waiting for machine to come up
	I0308 03:11:17.720344  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:17.720835  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:17.720859  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:17.720808  928590 retry.go:31] will retry after 2.308480723s: waiting for machine to come up
	I0308 03:11:20.030491  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:20.030991  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:20.031022  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:20.030944  928590 retry.go:31] will retry after 2.597182441s: waiting for machine to come up
	I0308 03:11:22.629604  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:22.630066  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:22.630089  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:22.630013  928590 retry.go:31] will retry after 4.489691082s: waiting for machine to come up
	I0308 03:11:27.123686  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:27.124120  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:27.124139  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:27.124081  928590 retry.go:31] will retry after 3.754931444s: waiting for machine to come up
	I0308 03:11:30.882410  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.883076  927850 main.go:141] libmachine: (ha-576225-m03) Found IP for machine: 192.168.39.17
	I0308 03:11:30.883097  927850 main.go:141] libmachine: (ha-576225-m03) Reserving static IP address...
	I0308 03:11:30.883107  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.883708  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find host DHCP lease matching {name: "ha-576225-m03", mac: "52:54:00:e1:8f:ef", ip: "192.168.39.17"} in network mk-ha-576225
	I0308 03:11:30.959126  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Getting to WaitForSSH function...
	I0308 03:11:30.959170  927850 main.go:141] libmachine: (ha-576225-m03) Reserved static IP address: 192.168.39.17
	I0308 03:11:30.959182  927850 main.go:141] libmachine: (ha-576225-m03) Waiting for SSH to be available...
	I0308 03:11:30.962115  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.962668  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:30.962694  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.962923  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using SSH client type: external
	I0308 03:11:30.962945  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa (-rw-------)
	I0308 03:11:30.962970  927850 main.go:141] libmachine: (ha-576225-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:11:30.962984  927850 main.go:141] libmachine: (ha-576225-m03) DBG | About to run SSH command:
	I0308 03:11:30.963002  927850 main.go:141] libmachine: (ha-576225-m03) DBG | exit 0
	I0308 03:11:31.089401  927850 main.go:141] libmachine: (ha-576225-m03) DBG | SSH cmd err, output: <nil>: 
	I0308 03:11:31.089707  927850 main.go:141] libmachine: (ha-576225-m03) KVM machine creation complete!
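The "will retry after ..." lines above (216ms, 344ms, ... up to several seconds) show the machine-creation code polling for the domain's DHCP lease with a growing, jittered delay. The following is an illustrative sketch of that pattern only, not minikube's retry package; the function names and constants are invented for the example.

	// Illustrative retry-with-growing-backoff sketch (assumed names/constants).
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor keeps calling check until it succeeds or timeout elapses,
	// sleeping a growing, jittered delay between attempts.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			// grow the delay with jitter, similar to the increasing intervals in the log
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}

	func main() {
		// hypothetical check standing in for "does the domain have an IP yet?"
		calls := 0
		err := waitFor(func() error {
			calls++
			if calls < 5 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 60*time.Second)
		fmt.Println("done:", err)
	}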
	I0308 03:11:31.090110  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:31.090881  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:31.091116  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:31.091322  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:11:31.091340  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:11:31.092835  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:11:31.092851  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:11:31.092859  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:11:31.092868  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.095343  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.095733  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.095764  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.095907  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.096070  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.096240  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.096398  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.096647  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.096936  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.096953  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:11:31.201096  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:11:31.201125  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:11:31.201133  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.204396  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.204790  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.204829  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.204971  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.205195  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.205402  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.205549  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.205729  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.205900  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.205913  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:11:31.311129  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:11:31.311255  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:11:31.311277  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:11:31.311290  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.311591  927850 buildroot.go:166] provisioning hostname "ha-576225-m03"
	I0308 03:11:31.311624  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.311842  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.314524  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.314965  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.314987  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.315176  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.315383  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.315558  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.315724  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.315904  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.316067  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.316079  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225-m03 && echo "ha-576225-m03" | sudo tee /etc/hostname
	I0308 03:11:31.433376  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225-m03
	
	I0308 03:11:31.433407  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.436250  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.436767  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.436799  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.436969  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.437218  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.437428  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.437604  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.437836  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.438010  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.438033  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:11:31.553621  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:11:31.553655  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:11:31.553678  927850 buildroot.go:174] setting up certificates
	I0308 03:11:31.553692  927850 provision.go:84] configureAuth start
	I0308 03:11:31.553706  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.554061  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:31.556667  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.557080  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.557122  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.557329  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.559741  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.560035  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.560066  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.560184  927850 provision.go:143] copyHostCerts
	I0308 03:11:31.560224  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:11:31.560268  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:11:31.560277  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:11:31.560370  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:11:31.560475  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:11:31.560504  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:11:31.560517  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:11:31.560555  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:11:31.560627  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:11:31.560647  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:11:31.560654  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:11:31.560677  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:11:31.560729  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225-m03 san=[127.0.0.1 192.168.39.17 ha-576225-m03 localhost minikube]
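The line above lists the SANs baked into the node's server certificate (loopback, the node IP, its hostnames). For reference only, a self-contained Go sketch of issuing such a cert with the standard library follows; it generates a throwaway CA in place of ca.pem/ca-key.pem and elides error handling, so it is not the provisioner's actual code.

	// Sketch: sign a server cert with the SANs from the log, using crypto/x509.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; the real flow would load ca.pem / ca-key.pem instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs shown in the log line above.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-576225-m03"}},
			DNSNames:     []string{"ha-576225-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.17")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}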
	I0308 03:11:32.027224  927850 provision.go:177] copyRemoteCerts
	I0308 03:11:32.027298  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:11:32.027324  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.030029  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.030410  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.030441  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.030639  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.030859  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.031038  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.031225  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.112944  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:11:32.113014  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:11:32.141177  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:11:32.141264  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 03:11:32.170370  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:11:32.170430  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 03:11:32.197884  927850 provision.go:87] duration metric: took 644.176956ms to configureAuth
	I0308 03:11:32.197915  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:11:32.198159  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:32.198253  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.202754  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.203255  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.203287  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.203477  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.203691  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.203915  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.204124  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.204346  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:32.204564  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:32.204582  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:11:32.494880  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:11:32.494907  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:11:32.494916  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetURL
	I0308 03:11:32.496428  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using libvirt version 6000000
	I0308 03:11:32.499346  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.499789  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.499827  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.500112  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:11:32.500131  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:11:32.500140  927850 client.go:171] duration metric: took 25.04507583s to LocalClient.Create
	I0308 03:11:32.500168  927850 start.go:167] duration metric: took 25.045143066s to libmachine.API.Create "ha-576225"
	I0308 03:11:32.500179  927850 start.go:293] postStartSetup for "ha-576225-m03" (driver="kvm2")
	I0308 03:11:32.500189  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:11:32.500206  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.500461  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:11:32.500493  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.502835  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.503257  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.503287  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.503472  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.503664  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.503859  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.503980  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.590684  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:11:32.595651  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:11:32.595684  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:11:32.595762  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:11:32.595872  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:11:32.595888  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:11:32.595999  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:11:32.607362  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:11:32.638187  927850 start.go:296] duration metric: took 137.992115ms for postStartSetup
	I0308 03:11:32.638244  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:32.638850  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:32.641586  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.642000  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.642032  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.642284  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:32.642552  927850 start.go:128] duration metric: took 25.207373987s to createHost
	I0308 03:11:32.642588  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.644980  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.645363  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.645386  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.645565  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.645768  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.645922  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.646081  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.646298  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:32.646511  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:32.646535  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:11:32.750541  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867492.732176517
	
	I0308 03:11:32.750570  927850 fix.go:216] guest clock: 1709867492.732176517
	I0308 03:11:32.750581  927850 fix.go:229] Guest: 2024-03-08 03:11:32.732176517 +0000 UTC Remote: 2024-03-08 03:11:32.642570633 +0000 UTC m=+172.395509561 (delta=89.605884ms)
	I0308 03:11:32.750606  927850 fix.go:200] guest clock delta is within tolerance: 89.605884ms
	I0308 03:11:32.750613  927850 start.go:83] releasing machines lock for "ha-576225-m03", held for 25.315572264s
	I0308 03:11:32.750637  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.750969  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:32.753597  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.753922  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.753947  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.756408  927850 out.go:177] * Found network options:
	I0308 03:11:32.757804  927850 out.go:177]   - NO_PROXY=192.168.39.251,192.168.39.128
	W0308 03:11:32.759109  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 03:11:32.759134  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:11:32.759150  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759630  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759803  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759935  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:11:32.759988  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	W0308 03:11:32.760084  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 03:11:32.760107  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:11:32.760196  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:11:32.760221  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.762779  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763225  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763266  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.763288  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763374  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.763591  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.763647  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.763675  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763785  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.763882  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.763983  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.764016  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.764134  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.764282  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:33.008382  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:11:33.017209  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:11:33.017313  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:11:33.037249  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:11:33.037290  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:11:33.037378  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:11:33.055104  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:11:33.070739  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:11:33.070810  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:11:33.085894  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:11:33.102069  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:11:33.231998  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:11:33.385442  927850 docker.go:233] disabling docker service ...
	I0308 03:11:33.385507  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:11:33.403675  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:11:33.419868  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:11:33.570788  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:11:33.702817  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:11:33.720244  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:11:33.742357  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:11:33.742427  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.754938  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:11:33.754988  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.767118  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.779178  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.790949  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:11:33.804101  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:11:33.814949  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:11:33.814998  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:11:33.829548  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:11:33.840326  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:11:33.957615  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
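After the sed edits above and before the crio restart, the drop-in /etc/crio/crio.conf.d/02-crio.conf would end up containing roughly the following; the section grouping is an assumption based on the stock layout of that file, only the three edited keys are taken from the commands in the log.

	# approximate resulting drop-in (assumed section layout)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"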
	I0308 03:11:34.114582  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:11:34.114681  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:11:34.120233  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:11:34.120290  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:11:34.124705  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:11:34.171114  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:11:34.171214  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:11:34.208566  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:11:34.243311  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:11:34.244885  927850 out.go:177]   - env NO_PROXY=192.168.39.251
	I0308 03:11:34.246353  927850 out.go:177]   - env NO_PROXY=192.168.39.251,192.168.39.128
	I0308 03:11:34.247669  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:34.250669  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:34.251065  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:34.251094  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:34.251353  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:11:34.256292  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:11:34.270302  927850 mustload.go:65] Loading cluster: ha-576225
	I0308 03:11:34.270571  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:34.270842  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:34.270882  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:34.287147  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0308 03:11:34.287662  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:34.288187  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:34.288213  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:34.288624  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:34.288859  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:11:34.290820  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:11:34.291180  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:34.291223  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:34.305635  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0308 03:11:34.306060  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:34.306610  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:34.306645  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:34.306983  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:34.307198  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:11:34.307371  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.17
	I0308 03:11:34.307382  927850 certs.go:194] generating shared ca certs ...
	I0308 03:11:34.307397  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.307518  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:11:34.307556  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:11:34.307565  927850 certs.go:256] generating profile certs ...
	I0308 03:11:34.307657  927850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:11:34.307686  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1
	I0308 03:11:34.307698  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.17 192.168.39.254]
	I0308 03:11:34.473425  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 ...
	I0308 03:11:34.473460  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1: {Name:mk490d533f12bd08746b8a0548aa53b8f0e67c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.473629  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1 ...
	I0308 03:11:34.473647  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1: {Name:mk1651ac3b4b39cba47a5428730acc2b58c791b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.473723  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:11:34.473856  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:11:34.474067  927850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:11:34.474091  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:11:34.474107  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:11:34.474120  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:11:34.474133  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:11:34.474143  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:11:34.474155  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:11:34.474165  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:11:34.474179  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:11:34.474226  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:11:34.474263  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:11:34.474273  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:11:34.474293  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:11:34.474317  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:11:34.474337  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:11:34.474373  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:11:34.474409  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:11:34.474423  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:11:34.474435  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:34.474470  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:11:34.477717  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:34.478085  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:11:34.478117  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:34.478266  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:11:34.478441  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:11:34.478587  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:11:34.478712  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:11:34.557613  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0308 03:11:34.564076  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0308 03:11:34.578715  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0308 03:11:34.583722  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0308 03:11:34.603538  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0308 03:11:34.608841  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0308 03:11:34.626715  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0308 03:11:34.631764  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0308 03:11:34.645769  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0308 03:11:34.652430  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0308 03:11:34.667823  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0308 03:11:34.674509  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0308 03:11:34.691729  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:11:34.721230  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:11:34.747759  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:11:34.774333  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:11:34.801229  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0308 03:11:34.831188  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:11:34.859197  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:11:34.885848  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:11:34.912282  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:11:34.937959  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:11:34.963746  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:11:34.990951  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0308 03:11:35.010210  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0308 03:11:35.028687  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0308 03:11:35.046896  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0308 03:11:35.065386  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0308 03:11:35.083334  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0308 03:11:35.101637  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
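Editor's note: the ssh_runner scp lines above stage the local PEM material into /var/lib/minikube/certs over the SSH session opened to 192.168.39.251:22. A rough sketch of the same idea with golang.org/x/crypto/ssh, assuming key-based auth; the key path and file names below are placeholders, and this is not minikube's actual transfer code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams a local file to remotePath on the host via `sudo tee`.
func pushFile(client *ssh.Client, localPath, remotePath string) error {
	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	sess.Stdin = f
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}

func main() {
	key, _ := os.ReadFile("/home/jenkins/.ssh/id_rsa") // placeholder key path
	signer, _ := ssh.ParsePrivateKey(key)
	client, err := ssh.Dial("tcp", "192.168.39.251:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only convenience
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	if err := pushFile(client, "ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
		panic(err)
	}
}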
	I0308 03:11:35.119855  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:11:35.126819  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:11:35.140010  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.145696  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.145752  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.152185  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:11:35.164700  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:11:35.177680  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.184570  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.184623  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.192134  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:11:35.205079  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:11:35.218196  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.223208  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.223256  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.230210  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
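Editor's note: the `openssl x509 -hash` calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0), which is how the extra CA certs become trusted system-wide. A minimal sanity check of one of those links, using the hash/target pair from the log (illustrative only):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hash and target taken from the log lines above.
	link := "/etc/ssl/certs/b5213941.0"
	want := "/etc/ssl/certs/minikubeCA.pem"

	got, err := os.Readlink(link)
	if err != nil {
		panic(err)
	}
	if got != want {
		panic(fmt.Sprintf("%s points at %s, expected %s", link, got, want))
	}
	fmt.Println("CA symlink looks correct")
}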
	I0308 03:11:35.242505  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:11:35.247494  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:11:35.247559  927850 kubeadm.go:928] updating node {m03 192.168.39.17 8443 v1.28.4 crio true true} ...
	I0308 03:11:35.247712  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:11:35.247755  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:11:35.247796  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
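Editor's note: the manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip as a static pod on this control-plane node; the vip_leaderelection / plndr-cp-lock settings mean only the current lease holder answers ARP for 192.168.39.254. A minimal text/template sketch of rendering such a manifest with just the VIP and port as inputs, as an illustration rather than minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// vipManifest is a trimmed stand-in for the kube-vip static pod; only the
// fields that vary per cluster are templated here.
const vipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{ .Port }}"}
    - {name: address, value: "{{ .VIP }}"}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipManifest))
	// Values taken from the log: HA VIP 192.168.39.254 on the apiserver port 8443.
	if err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		panic(err)
	}
}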
	I0308 03:11:35.247840  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:11:35.260165  927850 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0308 03:11:35.260211  927850 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0308 03:11:35.271487  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0308 03:11:35.271547  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:11:35.271555  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0308 03:11:35.271574  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:11:35.271585  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0308 03:11:35.271627  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:11:35.271639  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:11:35.271647  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:11:35.276735  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 03:11:35.276760  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0308 03:11:35.323769  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 03:11:35.323776  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:11:35.323823  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0308 03:11:35.323903  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:11:35.372383  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 03:11:35.372427  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
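Editor's note: because /var/lib/minikube/binaries/v1.28.4 was empty on the new node, kubeadm, kubectl and kubelet are pushed from the local cache; the "Not caching binary" lines pair each dl.k8s.io binary URL with its .sha256 file. A minimal sketch of verifying a cached binary against that published digest, assuming the .sha256 file holds just the hex checksum (the cache path below is a placeholder):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	const binPath = "/home/jenkins/.minikube/cache/linux/amd64/v1.28.4/kubelet" // placeholder cache path
	const sumURL = "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256"

	// Hash the cached binary.
	f, err := os.Open(binPath)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	// Fetch the published checksum (assumed to contain only the hex digest).
	resp, err := http.Get(sumURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)

	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch for %s", binPath))
	}
	fmt.Println("kubelet checksum OK")
}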
	I0308 03:11:36.323516  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0308 03:11:36.334579  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 03:11:36.353738  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:11:36.373834  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:11:36.392530  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:11:36.397837  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:11:36.412941  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:11:36.535957  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:11:36.558242  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:11:36.558597  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:36.558649  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:36.574890  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0308 03:11:36.575401  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:36.575971  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:36.576005  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:36.576382  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:36.576597  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:11:36.576771  927850 start.go:316] joinCluster: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:11:36.576945  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 03:11:36.576969  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:11:36.580127  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:36.580566  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:11:36.580598  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:36.580812  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:11:36.580996  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:11:36.581140  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:11:36.581286  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:11:36.759006  927850 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:11:36.759058  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dt9zvj.jo1ekfffapjlcpt7 --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I0308 03:12:05.219992  927850 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dt9zvj.jo1ekfffapjlcpt7 --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (28.460900188s)
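Editor's note: the join command above embeds --discovery-token-ca-cert-hash sha256:..., which kubeadm defines as the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal sketch that recomputes that hash from ca.crt so the value printed by `kubeadm token create --print-join-command` can be cross-checked:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA, path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}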
	I0308 03:12:05.220036  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 03:12:05.862267  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225-m03 minikube.k8s.io/updated_at=2024_03_08T03_12_05_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=false
	I0308 03:12:05.995524  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-576225-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0308 03:12:06.139985  927850 start.go:318] duration metric: took 29.563204661s to joinCluster
	I0308 03:12:06.140076  927850 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:12:06.141255  927850 out.go:177] * Verifying Kubernetes components...
	I0308 03:12:06.142352  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:12:06.140411  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:12:06.473661  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:12:06.607643  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:12:06.608018  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0308 03:12:06.608128  927850 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.251:8443
	I0308 03:12:06.608464  927850 node_ready.go:35] waiting up to 6m0s for node "ha-576225-m03" to be "Ready" ...
	I0308 03:12:06.608601  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:06.608613  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:06.608623  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:06.608629  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:06.613476  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:07.108987  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:07.109012  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:07.109021  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:07.109024  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:07.113489  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:07.609611  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:07.609654  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:07.609667  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:07.609676  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:07.614855  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:08.109136  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:08.109159  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:08.109169  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:08.109174  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:08.112710  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:08.609205  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:08.609230  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:08.609238  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:08.609243  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:08.613299  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:08.614738  927850 node_ready.go:53] node "ha-576225-m03" has status "Ready":"False"
	I0308 03:12:09.109138  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:09.109171  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:09.109184  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:09.109192  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:09.114081  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:09.609108  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:09.609133  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:09.609142  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:09.609146  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:09.612790  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.109621  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:10.109651  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.109660  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.109664  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.115853  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:12:10.609123  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:10.609144  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.609153  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.609164  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.613175  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.614166  927850 node_ready.go:49] node "ha-576225-m03" has status "Ready":"True"
	I0308 03:12:10.614188  927850 node_ready.go:38] duration metric: took 4.005703177s for node "ha-576225-m03" to be "Ready" ...
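Editor's note: node_ready.go polls GET /api/v1/nodes/ha-576225-m03 roughly every 500ms until the NodeReady condition flips to True (about 4s here). A minimal client-go sketch of the same wait, assuming the kubeconfig written earlier in the log; the helper names here are illustrative, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, up to 6 minutes, mirroring the wait in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-576225-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-576225-m03" is Ready`)
}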
	I0308 03:12:10.614198  927850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:12:10.614258  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:10.614267  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.614273  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.614280  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.623022  927850 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0308 03:12:10.630027  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.630131  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8qvhp
	I0308 03:12:10.630142  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.630149  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.630154  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.633099  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.633860  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.633878  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.633886  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.633890  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.636909  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.637573  927850 pod_ready.go:92] pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.637592  927850 pod_ready.go:81] duration metric: took 7.542544ms for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.637601  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.637661  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pqz96
	I0308 03:12:10.637670  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.637676  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.637683  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.640544  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.641337  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.641351  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.641359  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.641363  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.644006  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.644613  927850 pod_ready.go:92] pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.644629  927850 pod_ready.go:81] duration metric: took 7.0209ms for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.644637  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.644688  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225
	I0308 03:12:10.644696  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.644703  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.644705  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.647376  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.647921  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.647937  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.647944  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.647948  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.651034  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.651665  927850 pod_ready.go:92] pod "etcd-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.651684  927850 pod_ready.go:81] duration metric: took 7.040357ms for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.651695  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.651758  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m02
	I0308 03:12:10.651767  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.651777  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.651785  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.654568  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.655142  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:10.655161  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.655173  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.655181  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.657901  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.658409  927850 pod_ready.go:92] pod "etcd-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.658431  927850 pod_ready.go:81] duration metric: took 6.728336ms for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.658442  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.809840  927850 request.go:629] Waited for 151.319587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:10.809919  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:10.809926  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.809935  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.809945  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.814979  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:11.009925  927850 request.go:629] Waited for 194.218079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.010026  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.010038  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.010046  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.010051  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.013791  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.209544  927850 request.go:629] Waited for 50.248963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.209624  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.209633  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.209645  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.209655  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.213293  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.409351  927850 request.go:629] Waited for 195.315382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.409429  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.409439  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.409451  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.409459  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.414950  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
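Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default rate limiter; the rest.Config dump above shows QPS:0, Burst:0, which client-go treats as 5 requests/s with a burst of 10, and the back-to-back pod and node GETs in this readiness loop exceed that. A minimal sketch of raising the client-side limits when that throttling is unwanted, as an illustration rather than what minikube does:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}

	// client-go defaults to QPS 5 / Burst 10 when these are left at zero; the
	// ~150-200ms "client-side throttling" waits in the log are that limiter.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // subsequent node/pod GETs now run without client-side delays
}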
	I0308 03:12:11.659366  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.659391  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.659404  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.659410  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.662970  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.809832  927850 request.go:629] Waited for 146.155336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.809915  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.809921  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.809929  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.809937  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.814164  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:12.159173  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:12.159204  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.159217  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.159222  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.163032  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.209462  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:12.209495  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.209504  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.209508  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.213094  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.659197  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:12.659224  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.659234  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.659240  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.662989  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.663966  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:12.663982  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.663989  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.663992  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.667169  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.667942  927850 pod_ready.go:102] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:13.159056  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:13.159081  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.159089  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.159094  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.162701  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:13.163430  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:13.163445  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.163452  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.163470  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.166687  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:13.659292  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:13.659317  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.659326  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.659331  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.663420  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:13.664337  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:13.664353  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.664360  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.664364  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.667368  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:14.159557  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:14.159587  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.159600  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.159605  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.163923  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.164807  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:14.164830  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.164841  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.164847  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.168939  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.658833  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:14.658890  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.658902  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.658908  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.663084  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.664159  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:14.664177  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.664184  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.664188  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.667419  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:14.668371  927850 pod_ready.go:102] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:15.159243  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:15.159266  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.159275  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.159281  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.163078  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:15.163734  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:15.163750  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.163757  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.163760  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.168506  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:15.659119  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:15.659145  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.659156  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.659162  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.663478  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:15.664478  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:15.664492  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.664500  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.664504  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.667813  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.158923  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:16.158951  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.158960  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.158964  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.162787  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.163510  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:16.163532  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.163544  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.163552  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.169472  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:16.658897  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:16.658918  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.658926  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.658929  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.662884  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.663730  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:16.663746  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.663754  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.663757  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.667100  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.667648  927850 pod_ready.go:92] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.667674  927850 pod_ready.go:81] duration metric: took 6.009223937s for pod "etcd-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.667694  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.667755  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:12:16.667765  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.667775  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.667782  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.671228  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.671999  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:16.672015  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.672022  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.672027  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.675065  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.675620  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.675642  927850 pod_ready.go:81] duration metric: took 7.93823ms for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.675654  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.675723  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:12:16.675732  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.675739  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.675743  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.678782  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.679529  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:16.679549  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.679559  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.679564  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.682503  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:16.683069  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.683085  927850 pod_ready.go:81] duration metric: took 7.416749ms for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.683093  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.809582  927850 request.go:629] Waited for 126.434854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:16.809657  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:16.809665  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.809673  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.809681  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.814238  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:17.009545  927850 request.go:629] Waited for 194.336517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.009624  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.009641  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.009652  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.009662  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.013125  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.210191  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:17.210221  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.210230  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.210234  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.213437  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.409365  927850 request.go:629] Waited for 195.326021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.409428  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.409433  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.409441  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.409445  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.412712  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.684031  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:17.684058  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.684066  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.684070  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.687840  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.810060  927850 request.go:629] Waited for 121.330314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.810141  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.810151  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.810161  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.810166  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.814919  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:18.183444  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:18.183484  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.183493  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.183496  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.187729  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:18.209863  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:18.209893  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.209904  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.209913  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.213732  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.683850  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:18.683875  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.683883  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.683887  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.687801  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.688889  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:18.688907  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.688915  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.688920  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.692757  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.694057  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:19.183449  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:19.183473  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.183481  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.183487  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.187961  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:19.189192  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:19.189216  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.189229  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.189236  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.192925  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:19.683679  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:19.683709  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.683718  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.683722  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.687602  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:19.688513  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:19.688533  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.688542  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.688547  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.692661  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:20.183275  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:20.183297  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.183306  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.183311  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.188008  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:20.189573  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:20.189597  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.189610  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.189616  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.193431  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.683299  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:20.683323  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.683330  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.683334  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.686816  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.687720  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:20.687740  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.687750  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.687754  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.691161  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.692062  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:20.692085  927850 pod_ready.go:81] duration metric: took 4.008983643s for pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.692099  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.692181  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225
	I0308 03:12:20.692193  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.692203  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.692256  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.696116  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.696802  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:20.696823  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.696834  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.696842  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.700077  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.700683  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:20.700707  927850 pod_ready.go:81] duration metric: took 8.599475ms for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.700720  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.810081  927850 request.go:629] Waited for 109.23929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:12:20.810175  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:12:20.810183  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.810193  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.810204  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.814972  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.009133  927850 request.go:629] Waited for 193.223791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:21.009211  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:21.009223  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.009231  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.009235  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.013361  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.014100  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.014123  927850 pod_ready.go:81] duration metric: took 313.394468ms for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.014138  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.209143  927850 request.go:629] Waited for 194.924117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m03
	I0308 03:12:21.209228  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m03
	I0308 03:12:21.209236  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.209246  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.209262  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.213302  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.409339  927850 request.go:629] Waited for 195.303192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.409430  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.409437  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.409449  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.409457  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.414090  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.414729  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.414749  927850 pod_ready.go:81] duration metric: took 400.602928ms for pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.414761  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gqc9f" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.610182  927850 request.go:629] Waited for 195.322305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gqc9f
	I0308 03:12:21.610249  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gqc9f
	I0308 03:12:21.610255  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.610262  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.610270  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.614335  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.809352  927850 request.go:629] Waited for 194.313013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.809447  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.809457  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.809465  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.809469  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.813130  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:21.813626  927850 pod_ready.go:92] pod "kube-proxy-gqc9f" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.813651  927850 pod_ready.go:81] duration metric: took 398.880333ms for pod "kube-proxy-gqc9f" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.813664  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.010228  927850 request.go:629] Waited for 196.450548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:12:22.010311  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:12:22.010324  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.010336  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.010343  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.014603  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:22.210014  927850 request.go:629] Waited for 194.37125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:22.210112  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:22.210119  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.210129  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.210160  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.213783  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:22.214460  927850 pod_ready.go:92] pod "kube-proxy-pcmj2" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:22.214487  927850 pod_ready.go:81] duration metric: took 400.8134ms for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.214503  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.410068  927850 request.go:629] Waited for 195.476035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:12:22.410188  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:12:22.410202  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.410216  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.410222  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.414262  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:22.609139  927850 request.go:629] Waited for 194.283786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:22.609250  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:22.609263  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.609288  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.609295  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.612617  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:22.613243  927850 pod_ready.go:92] pod "kube-proxy-vjfqv" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:22.613287  927850 pod_ready.go:81] duration metric: took 398.759086ms for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.613302  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.809207  927850 request.go:629] Waited for 195.786947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:12:22.809297  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:12:22.809306  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.809315  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.809319  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.813232  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.009295  927850 request.go:629] Waited for 195.287024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:23.009365  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:23.009372  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.009383  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.009391  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.013698  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:23.014272  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.014293  927850 pod_ready.go:81] duration metric: took 400.984379ms for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.014302  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.209399  927850 request.go:629] Waited for 195.012698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:12:23.209480  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:12:23.209485  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.209502  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.209511  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.213523  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.409989  927850 request.go:629] Waited for 195.367607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:23.410072  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:23.410080  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.410092  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.410113  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.413885  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.414628  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.414668  927850 pod_ready.go:81] duration metric: took 400.35686ms for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.414680  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.609610  927850 request.go:629] Waited for 194.848328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m03
	I0308 03:12:23.609683  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m03
	I0308 03:12:23.609688  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.609696  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.609700  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.613726  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.809995  927850 request.go:629] Waited for 195.322339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:23.810090  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:23.810101  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.810114  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.810123  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.815020  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:23.815865  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.815889  927850 pod_ready.go:81] duration metric: took 401.202158ms for pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.815904  927850 pod_ready.go:38] duration metric: took 13.201695841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:12:23.815923  927850 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:12:23.815993  927850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:12:23.834635  927850 api_server.go:72] duration metric: took 17.694513051s to wait for apiserver process to appear ...
	I0308 03:12:23.834667  927850 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:12:23.834686  927850 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0308 03:12:23.846970  927850 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0308 03:12:23.847059  927850 round_trippers.go:463] GET https://192.168.39.251:8443/version
	I0308 03:12:23.847070  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.847097  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.847109  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.848426  927850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 03:12:23.848488  927850 api_server.go:141] control plane version: v1.28.4
	I0308 03:12:23.848502  927850 api_server.go:131] duration metric: took 13.827518ms to wait for apiserver health ...
	I0308 03:12:23.848514  927850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:12:24.009793  927850 request.go:629] Waited for 161.190738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.009892  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.009904  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.009919  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.009927  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.017361  927850 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0308 03:12:24.023982  927850 system_pods.go:59] 24 kube-system pods found
	I0308 03:12:24.024011  927850 system_pods.go:61] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:12:24.024015  927850 system_pods.go:61] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:12:24.024019  927850 system_pods.go:61] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:12:24.024023  927850 system_pods.go:61] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:12:24.024027  927850 system_pods.go:61] "etcd-ha-576225-m03" [0116b1fc-b67f-4b77-b0df-2e467f872a40] Running
	I0308 03:12:24.024029  927850 system_pods.go:61] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:12:24.024033  927850 system_pods.go:61] "kindnet-j425g" [12209f2c-d279-4280-bb13-fe49af81cfea] Running
	I0308 03:12:24.024037  927850 system_pods.go:61] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:12:24.024042  927850 system_pods.go:61] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:12:24.024048  927850 system_pods.go:61] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:12:24.024055  927850 system_pods.go:61] "kube-apiserver-ha-576225-m03" [75efc1d4-9ebb-4e79-bb4f-1cbc58b7114f] Running
	I0308 03:12:24.024061  927850 system_pods.go:61] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:12:24.024073  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:12:24.024078  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m03" [d86f869b-b8bc-4f8b-b039-d73f36b2c29c] Running
	I0308 03:12:24.024084  927850 system_pods.go:61] "kube-proxy-gqc9f" [ef6598e1-d792-44b3-b0a7-4ce4b80b67d8] Running
	I0308 03:12:24.024091  927850 system_pods.go:61] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:12:24.024095  927850 system_pods.go:61] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:12:24.024101  927850 system_pods.go:61] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:12:24.024104  927850 system_pods.go:61] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:12:24.024110  927850 system_pods.go:61] "kube-scheduler-ha-576225-m03" [d0dc5765-5042-4946-888a-19a4e65ecf2e] Running
	I0308 03:12:24.024113  927850 system_pods.go:61] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:12:24.024117  927850 system_pods.go:61] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:12:24.024120  927850 system_pods.go:61] "kube-vip-ha-576225-m03" [59018698-49da-41e2-b4a5-9825edc8ae87] Running
	I0308 03:12:24.024125  927850 system_pods.go:61] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:12:24.024132  927850 system_pods.go:74] duration metric: took 175.610989ms to wait for pod list to return data ...
	I0308 03:12:24.024143  927850 default_sa.go:34] waiting for default service account to be created ...
	I0308 03:12:24.209584  927850 request.go:629] Waited for 185.351941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:12:24.209648  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:12:24.209653  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.209662  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.209675  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.213799  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:24.213952  927850 default_sa.go:45] found service account: "default"
	I0308 03:12:24.213972  927850 default_sa.go:55] duration metric: took 189.816018ms for default service account to be created ...
	I0308 03:12:24.213983  927850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 03:12:24.409209  927850 request.go:629] Waited for 195.138277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.409289  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.409297  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.409308  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.409323  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.416504  927850 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0308 03:12:24.425442  927850 system_pods.go:86] 24 kube-system pods found
	I0308 03:12:24.425469  927850 system_pods.go:89] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:12:24.425475  927850 system_pods.go:89] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:12:24.425479  927850 system_pods.go:89] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:12:24.425483  927850 system_pods.go:89] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:12:24.425487  927850 system_pods.go:89] "etcd-ha-576225-m03" [0116b1fc-b67f-4b77-b0df-2e467f872a40] Running
	I0308 03:12:24.425492  927850 system_pods.go:89] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:12:24.425496  927850 system_pods.go:89] "kindnet-j425g" [12209f2c-d279-4280-bb13-fe49af81cfea] Running
	I0308 03:12:24.425504  927850 system_pods.go:89] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:12:24.425512  927850 system_pods.go:89] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:12:24.425516  927850 system_pods.go:89] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:12:24.425523  927850 system_pods.go:89] "kube-apiserver-ha-576225-m03" [75efc1d4-9ebb-4e79-bb4f-1cbc58b7114f] Running
	I0308 03:12:24.425528  927850 system_pods.go:89] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:12:24.425535  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:12:24.425539  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m03" [d86f869b-b8bc-4f8b-b039-d73f36b2c29c] Running
	I0308 03:12:24.425546  927850 system_pods.go:89] "kube-proxy-gqc9f" [ef6598e1-d792-44b3-b0a7-4ce4b80b67d8] Running
	I0308 03:12:24.425552  927850 system_pods.go:89] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:12:24.425558  927850 system_pods.go:89] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:12:24.425562  927850 system_pods.go:89] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:12:24.425568  927850 system_pods.go:89] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:12:24.425572  927850 system_pods.go:89] "kube-scheduler-ha-576225-m03" [d0dc5765-5042-4946-888a-19a4e65ecf2e] Running
	I0308 03:12:24.425578  927850 system_pods.go:89] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:12:24.425582  927850 system_pods.go:89] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:12:24.425588  927850 system_pods.go:89] "kube-vip-ha-576225-m03" [59018698-49da-41e2-b4a5-9825edc8ae87] Running
	I0308 03:12:24.425592  927850 system_pods.go:89] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:12:24.425601  927850 system_pods.go:126] duration metric: took 211.612108ms to wait for k8s-apps to be running ...
	I0308 03:12:24.425609  927850 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 03:12:24.425655  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:12:24.444036  927850 system_svc.go:56] duration metric: took 18.418896ms WaitForService to wait for kubelet
	I0308 03:12:24.444065  927850 kubeadm.go:576] duration metric: took 18.303949873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:12:24.444104  927850 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:12:24.609516  927850 request.go:629] Waited for 165.336121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes
	I0308 03:12:24.609597  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes
	I0308 03:12:24.609602  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.609610  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.609616  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.614024  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:24.615227  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615252  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615263  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615267  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615271  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615274  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615278  927850 node_conditions.go:105] duration metric: took 171.169138ms to run NodePressure ...
	I0308 03:12:24.615290  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:12:24.615311  927850 start.go:254] writing updated cluster config ...
	I0308 03:12:24.615596  927850 ssh_runner.go:195] Run: rm -f paused
	I0308 03:12:24.671690  927850 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 03:12:24.673822  927850 out.go:177] * Done! kubectl is now configured to use "ha-576225" cluster and "default" namespace by default
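	For reference, the pod_ready.go polling recorded above (GET the pod, GET its node, check the Ready condition, retry until a timeout) can be approximated outside the test harness with a short client-go loop. The following is a minimal sketch under assumptions, not minikube's actual implementation: it assumes client-go is available and uses a hypothetical kubeconfig path; pod and namespace names are taken from the log for illustration only.

	// readiness_sketch.go - minimal sketch (assumptions noted above), not minikube's code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server until the named pod reports the
	// PodReady condition as True, or the timeout elapses.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // corresponds to the pod_ready.go:92 "Ready":"True" lines above
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // crude fixed delay; client-go also applies client-side throttling, as the request.go:629 lines show
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Hypothetical kubeconfig path; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-576225-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}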
	
	
	==> CRI-O <==
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.900694559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da3c8e19-f920-4967-8faa-81cffcb99336 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.902455046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d88b646-0e8d-423c-a0ea-de7b6ef81c4a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.902870061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867750902850400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d88b646-0e8d-423c-a0ea-de7b6ef81c4a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.903827631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9fce330-781f-44b1-befa-c41def75bfc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.903909410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9fce330-781f-44b1-befa-c41def75bfc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.906444695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9fce330-781f-44b1-befa-c41def75bfc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.946467635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccb0fdb6-8927-47dd-8fe8-7bdc9057d892 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.946557904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccb0fdb6-8927-47dd-8fe8-7bdc9057d892 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.954818743Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98eaab9e-434d-4ea4-bd1e-2732415150c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.955051325Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-9594n,Uid:d8bc0fba-1a5c-4082-a505-a0653c59180a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867546071948510,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:12:25.749871868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8qvhp,Uid:7686e8de-1f0a-4952-822a-22e888b17da3,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1709867383030500069,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:42.688652735Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867383030054852,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{ku
bectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T03:09:42.697505082Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-pqz96,Uid:e2bf0fdf-7908-4600-8e88-7496688efb0d,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1709867383010490630,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:42.695910860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&PodSandboxMetadata{Name:kindnet-dxqvf,Uid:68b9ef4f-0693-425c-b9e5-3232abe019b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867378771511784,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annota
tions:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:38.437897749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&PodSandboxMetadata{Name:kube-proxy-pcmj2,Uid:43be60bc-c064-4f45-9653-15b886260114,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867378760052541,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:38.419906077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&PodSandboxMetadata{Name:etcd-ha-576225,Uid:26cdb4c7afaf223219da4d02f01a1ea4,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1709867358960654669,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.251:2379,kubernetes.io/config.hash: 26cdb4c7afaf223219da4d02f01a1ea4,kubernetes.io/config.seen: 2024-03-08T03:09:18.435084423Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-576225,Uid:fb9fc89b7fdb50461eab2dcf2451250e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358952981636,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b
7fdb50461eab2dcf2451250e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.251:8443,kubernetes.io/config.hash: fb9fc89b7fdb50461eab2dcf2451250e,kubernetes.io/config.seen: 2024-03-08T03:09:18.435085785Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-576225,Uid:b43f1b4602f1b00b137428ffec94b74a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358944269694,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b43f1b4602f1b00b137428ffec94b74a,kubernetes.io/config.seen: 2024-03-08T03:09:18.435086681Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-576225,Uid:af200b4f08e9aba6d5619bb32fa9f733,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358933251081,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: af200b4f08e9aba6d5619bb32fa9f733,kubernetes.io/config.seen: 2024-03-08T03:09:18.435079820Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-576225,Uid:79332678c9cff5037e42e087635740e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358928007895,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{kubernetes.io/config.hash: 79332678c9cff5037e42e087635740e0,kubernetes.io/config.seen: 2024-03-08T03:09:18.435083364Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=98eaab9e-434d-4ea4-bd1e-2732415150c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.955661271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d6437a3-ed34-4f69-bbee-e11c19b21770 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.956062759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867750956042981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d6437a3-ed34-4f69-bbee-e11c19b21770 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.956481787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2563ce3c-1d13-4e50-88a5-fa1fafd3a119 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.956530931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2563ce3c-1d13-4e50-88a5-fa1fafd3a119 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.956741114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations
:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058
991457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2563ce3c-1d13-4e50-88a5-fa1fafd3a119 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.957230395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd0279bd-54c3-4f2e-a1ac-eb8f88137cc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.957273330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd0279bd-54c3-4f2e-a1ac-eb8f88137cc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:50 ha-576225 crio[675]: time="2024-03-08 03:15:50.957589408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd0279bd-54c3-4f2e-a1ac-eb8f88137cc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.006668548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a3203d6-189f-477f-bfbf-2feb20a3a472 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.006761636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a3203d6-189f-477f-bfbf-2feb20a3a472 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.007990694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bb8b86a-c108-469f-a222-d611a832ef61 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.008575900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867751008548023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bb8b86a-c108-469f-a222-d611a832ef61 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.009477498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=442f3662-bb66-4349-b081-4c12ec28f716 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.009609378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=442f3662-bb66-4349-b081-4c12ec28f716 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:15:51 ha-576225 crio[675]: time="2024-03-08 03:15:51.009892406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=442f3662-bb66-4349-b081-4c12ec28f716 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5282718f03eb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0524f01439e2f       busybox-5b5d89c9d6-9594n
	6dcd572cdc4ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   2f7897e64ae10       storage-provisioner
	c751323fea4d9       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   a6b1803470779       kube-vip-ha-576225
	00534de89b2ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       0                   2f7897e64ae10       storage-provisioner
	c29d3c09ae3c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   632fde5a7793c       coredns-5dd5756b68-8qvhp
	e6551e5e70b01       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   5d9f21a723332       coredns-5dd5756b68-pqz96
	6775e52109dca       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   88d456c41e9f6       kindnet-dxqvf
	da2c9bb706201       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   9f60642cbf5af       kube-proxy-pcmj2
	31099fe894975       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   a6b1803470779       kube-vip-ha-576225
	79db3710d20d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   5b9d25fbfde63       etcd-ha-576225
	556a4677df889       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   9d1b14daf08ee       kube-controller-manager-ha-576225
	fe007de6550da       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   2e14d9826288f       kube-apiserver-ha-576225
	77dc7f2494354       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   7a8444878ab4c       kube-scheduler-ha-576225
	
	
	==> coredns [c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788] <==
	[INFO] 10.244.0.4:57715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185202s
	[INFO] 10.244.0.4:58493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187997s
	[INFO] 10.244.0.4:51494 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142605s
	[INFO] 10.244.0.4:36385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003322395s
	[INFO] 10.244.0.4:39290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119187s
	[INFO] 10.244.0.4:54781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156597s
	[INFO] 10.244.2.2:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156855s
	[INFO] 10.244.2.2:51544 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122332s
	[INFO] 10.244.2.2:36974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001216836s
	[INFO] 10.244.2.2:46648 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079695s
	[INFO] 10.244.2.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116087s
	[INFO] 10.244.1.2:55081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181347s
	[INFO] 10.244.1.2:33288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001414035s
	[INFO] 10.244.1.2:34740 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200343s
	[INFO] 10.244.1.2:34593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089308s
	[INFO] 10.244.0.4:57556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168693s
	[INFO] 10.244.0.4:55624 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070785s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203686s
	[INFO] 10.244.2.2:38702 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143629s
	[INFO] 10.244.2.2:39439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:41980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276421s
	[INFO] 10.244.0.4:55612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118127s
	[INFO] 10.244.0.4:54270 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081257s
	[INFO] 10.244.2.2:49847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192089s
	[INFO] 10.244.2.2:45358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198525s
	
	
	==> coredns [e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df] <==
	[INFO] 10.244.1.2:40496 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000539144s
	[INFO] 10.244.1.2:44875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001406973s
	[INFO] 10.244.0.4:34507 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002484084s
	[INFO] 10.244.0.4:41817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000191005s
	[INFO] 10.244.2.2:46018 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001768234s
	[INFO] 10.244.2.2:44074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245211s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020143s
	[INFO] 10.244.1.2:36967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124177s
	[INFO] 10.244.1.2:49099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135326s
	[INFO] 10.244.1.2:38253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253563s
	[INFO] 10.244.1.2:39140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097524s
	[INFO] 10.244.0.4:50886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066375s
	[INFO] 10.244.0.4:36001 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044745s
	[INFO] 10.244.2.2:52701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189269s
	[INFO] 10.244.1.2:56384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178001s
	[INFO] 10.244.1.2:57745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181456s
	[INFO] 10.244.1.2:36336 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125903s
	[INFO] 10.244.0.4:51847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152568s
	[INFO] 10.244.0.4:40398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222601s
	[INFO] 10.244.2.2:39215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179733s
	[INFO] 10.244.2.2:44810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018976s
	[INFO] 10.244.1.2:53930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169054s
	[INFO] 10.244.1.2:39490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132254s
	[INFO] 10.244.1.2:45653 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129104s
	[INFO] 10.244.1.2:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154053s
	
	
	==> describe nodes <==
	Name:               ha-576225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:15:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-576225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1732a5e385cf44ce86b216e3f63b18e9
	  System UUID:                1732a5e3-85cf-44ce-86b2-16e3f63b18e9
	  Boot ID:                    22459aef-7ea9-46db-b507-1fb97d6edacd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9594n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-5dd5756b68-8qvhp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-5dd5756b68-pqz96             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-576225                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-dxqvf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-576225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-576225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-pcmj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-576225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-576225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m22s                  kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s                  kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s                  kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal  NodeReady                6m9s                   kubelet          Node ha-576225 status is now: NodeReady
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal  RegisteredNode           3m31s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	
	
	Name:               ha-576225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:10:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:13:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-576225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 852d29792aec4a87b8b6c74704738411
	  System UUID:                852d2979-2aec-4a87-b8b6-c74704738411
	  Boot ID:                    7dd1b7b9-6e88-4666-a7ad-564e8cd548ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wlj7r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-576225-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-w8zww                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-576225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-576225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-vjfqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-576225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-vip-ha-576225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m56s  kube-proxy       
	  Normal  RegisteredNode  5m14s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode  4m45s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode  3m31s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  NodeNotReady    104s   node-controller  Node ha-576225-m02 status is now: NodeNotReady
	
	
	Name:               ha-576225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_12_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:12:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:15:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-576225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e53bc87ed31a4387be9c7b928f4e70cd
	  System UUID:                e53bc87e-d31a-4387-be9c-7b928f4e70cd
	  Boot ID:                    48eba781-e477-4452-8326-e60054c38dbb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc27d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-576225-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-j425g                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-apiserver-ha-576225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-576225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-gqc9f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-scheduler-ha-576225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-vip-ha-576225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m47s  kube-proxy       
	  Normal  RegisteredNode  3m49s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal  RegisteredNode  3m45s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal  RegisteredNode  3m31s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	
	
	Name:               ha-576225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_13_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:15:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-576225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 524efacfa67040b0afe359afd19efdd6
	  System UUID:                524efacf-a670-40b0-afe3-59afd19efdd6
	  Boot ID:                    d890d781-2a80-445d-89e7-43c2432b0da3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5qbg6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m50s
	  kube-system                 kube-proxy-mk2g8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x5 over 2m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x5 over 2m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x5 over 2m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  RegisteredNode           2m46s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-576225-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar 8 03:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051989] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042634] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.518416] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.422136] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.681949] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 8 03:09] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.056257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063726] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.163955] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153131] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264990] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.215071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060445] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.086248] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.235554] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.086526] kauditd_printk_skb: 40 callbacks suppressed
	[  +2.541733] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[ +10.298670] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.185227] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025] <==
	{"level":"warn","ts":"2024-03-08T03:15:51.307469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.316582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.321467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.337472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.34614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.354559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.359014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.361955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.37117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.376218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.378044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.41065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.418574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.423414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.437569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.453567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.472546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.476554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.477018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.481821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.48891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.497256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.503842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.568728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:15:51.576538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:15:51 up 7 min,  0 users,  load average: 0.32, 0.53, 0.30
	Linux ha-576225 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e] <==
	I0308 03:15:11.879819       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:15:21.893170       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:15:21.893469       1 main.go:227] handling current node
	I0308 03:15:21.893710       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:15:21.893793       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:15:21.894008       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:15:21.894076       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:15:21.894214       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:15:21.894268       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:15:31.902857       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:15:31.902902       1 main.go:227] handling current node
	I0308 03:15:31.902912       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:15:31.902918       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:15:31.903520       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:15:31.903603       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:15:31.903672       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:15:31.903752       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:15:41.909658       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:15:41.909716       1 main.go:227] handling current node
	I0308 03:15:41.909726       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:15:41.909732       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:15:41.909851       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:15:41.909884       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:15:41.909939       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:15:41.909944       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c] <==
	Trace[975446308]:  ---"Txn call completed" 3879ms (03:10:51.511)]
	Trace[975446308]: ---"About to apply patch" 3880ms (03:10:51.511)
	Trace[975446308]: [3.88270775s] [3.88270775s] END
	I0308 03:10:51.513812       1 trace.go:236] Trace[1006107015]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a0d397e-8eaf-48c9-9e1b-eb336f6c6341,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-576225,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (08-Mar-2024 03:10:47.181) (total time: 4332ms):
	Trace[1006107015]: ["GuaranteedUpdate etcd3" audit-id:4a0d397e-8eaf-48c9-9e1b-eb336f6c6341,key:/leases/kube-node-lease/ha-576225,type:*coordination.Lease,resource:leases.coordination.k8s.io 4332ms (03:10:47.181)
	Trace[1006107015]:  ---"Txn call completed" 4331ms (03:10:51.513)]
	Trace[1006107015]: [4.332477664s] [4.332477664s] END
	I0308 03:10:51.515522       1 trace.go:236] Trace[726453465]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b72982c3-a6e8-4744-925c-1e32e2f6783b,client:192.168.39.128,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:45.247) (total time: 6267ms):
	Trace[726453465]: ["Create etcd3" audit-id:b72982c3-a6e8-4744-925c-1e32e2f6783b,key:/events/kube-system/kube-vip-ha-576225-m02.17baab603a97f594,type:*core.Event,resource:events 6267ms (03:10:45.248)
	Trace[726453465]:  ---"Txn call succeeded" 6266ms (03:10:51.515)]
	Trace[726453465]: [6.267573919s] [6.267573919s] END
	I0308 03:10:51.555174       1 trace.go:236] Trace[1361706867]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0f4c7967-9609-4262-af3b-7069631c5b78,client:192.168.39.128,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-576225-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (08-Mar-2024 03:10:47.116) (total time: 4438ms):
	Trace[1361706867]: ["GuaranteedUpdate etcd3" audit-id:0f4c7967-9609-4262-af3b-7069631c5b78,key:/minions/ha-576225-m02,type:*core.Node,resource:nodes 4438ms (03:10:47.116)
	Trace[1361706867]:  ---"Txn call completed" 4393ms (03:10:51.511)
	Trace[1361706867]:  ---"Txn call completed" 41ms (03:10:51.554)]
	Trace[1361706867]: ---"About to apply patch" 4393ms (03:10:51.511)
	Trace[1361706867]: ---"Object stored in database" 41ms (03:10:51.554)
	Trace[1361706867]: [4.43839163s] [4.43839163s] END
	I0308 03:10:51.572082       1 trace.go:236] Trace[520816267]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a32672da-c798-4df5-a30a-db78d2ee4bc1,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:45.885) (total time: 5686ms):
	Trace[520816267]: [5.686200268s] [5.686200268s] END
	I0308 03:10:51.576675       1 trace.go:236] Trace[1188707554]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:53521e53-bcfd-42b6-b12c-4ccc13f6573d,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:44.397) (total time: 7178ms):
	Trace[1188707554]: [7.178975243s] [7.178975243s] END
	I0308 03:10:51.580468       1 trace.go:236] Trace[1763030422]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:64626611-9a16-40fe-a10f-c16277898ecc,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:46.399) (total time: 5181ms):
	Trace[1763030422]: [5.181389259s] [5.181389259s] END
	W0308 03:13:34.357866       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.251]
	
	
	==> kube-controller-manager [556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2] <==
	E0308 03:13:00.040221       1 certificate_controller.go:146] Sync csr-f5xth failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-f5xth": the object has been modified; please apply your changes to the latest version and try again
	E0308 03:13:00.058600       1 certificate_controller.go:146] Sync csr-f5xth failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-f5xth": the object has been modified; please apply your changes to the latest version and try again
	I0308 03:13:01.551010       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-576225-m04\" does not exist"
	I0308 03:13:01.590116       1 range_allocator.go:380] "Set node PodCIDR" node="ha-576225-m04" podCIDRs=["10.244.3.0/24"]
	I0308 03:13:01.623631       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tt2g5"
	I0308 03:13:01.630279       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5qbg6"
	I0308 03:13:01.727660       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-k68g4"
	I0308 03:13:01.754785       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-tt2g5"
	I0308 03:13:01.818548       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qbtrf"
	I0308 03:13:01.867541       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-sv66p"
	I0308 03:13:02.540010       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225-m04"
	I0308 03:13:02.540304       1 event.go:307] "Event occurred" object="ha-576225-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller"
	I0308 03:13:09.084526       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-576225-m04"
	I0308 03:14:07.573676       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-576225-m04"
	I0308 03:14:07.575859       1 event.go:307] "Event occurred" object="ha-576225-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-576225-m02 status is now: NodeNotReady"
	I0308 03:14:07.600972       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.620729       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.637292       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.651460       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-wlj7r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.672970       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vjfqv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.681554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.654915ms"
	I0308 03:14:07.682418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="114.514µs"
	I0308 03:14:07.720625       1 event.go:307] "Event occurred" object="kube-system/kindnet-w8zww" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.745980       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.776956       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176] <==
	I0308 03:09:39.528881       1 server_others.go:69] "Using iptables proxy"
	I0308 03:09:39.543990       1 node.go:141] Successfully retrieved node IP: 192.168.39.251
	I0308 03:09:39.609748       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:09:39.609788       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:09:39.612456       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:09:39.612921       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:09:39.613100       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:09:39.613144       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:09:39.614717       1 config.go:188] "Starting service config controller"
	I0308 03:09:39.615182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:09:39.615246       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:09:39.615253       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:09:39.616111       1 config.go:315] "Starting node config controller"
	I0308 03:09:39.616145       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:09:39.716286       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:09:39.719403       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:09:39.719425       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446] <==
	W0308 03:09:22.701875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:09:22.702015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:09:23.513890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 03:09:23.513999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 03:09:23.530275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:09:23.530459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:09:23.592639       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:09:23.592722       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:09:23.593942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:09:23.593994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:09:23.794105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 03:09:23.794127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 03:09:23.930026       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:09:23.930102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0308 03:09:25.382141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0308 03:12:25.760214       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-cc27d\": pod busybox-5b5d89c9d6-cc27d is already assigned to node \"ha-576225-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-cc27d" node="ha-576225-m03"
	E0308 03:12:25.760792       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 568c3895-25ab-4967-bebd-d0bbb9203ec4(default/busybox-5b5d89c9d6-cc27d) wasn't assumed so cannot be forgotten"
	E0308 03:12:25.760883       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-cc27d\": pod busybox-5b5d89c9d6-cc27d is already assigned to node \"ha-576225-m03\"" pod="default/busybox-5b5d89c9d6-cc27d"
	I0308 03:12:25.760951       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-cc27d" node="ha-576225-m03"
	E0308 03:13:01.674843       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5qbg6\": pod kindnet-5qbg6 is already assigned to node \"ha-576225-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5qbg6" node="ha-576225-m04"
	E0308 03:13:01.674979       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 8f4975bf-f49e-4f05-b5f7-f8e9fc419bbe(kube-system/kindnet-5qbg6) wasn't assumed so cannot be forgotten"
	E0308 03:13:01.675041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5qbg6\": pod kindnet-5qbg6 is already assigned to node \"ha-576225-m04\"" pod="kube-system/kindnet-5qbg6"
	I0308 03:13:01.675101       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5qbg6" node="ha-576225-m04"
	E0308 03:13:01.675915       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tt2g5\": pod kube-proxy-tt2g5 is already assigned to node \"ha-576225-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tt2g5" node="ha-576225-m04"
	E0308 03:13:01.676051       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tt2g5\": pod kube-proxy-tt2g5 is already assigned to node \"ha-576225-m04\"" pod="kube-system/kube-proxy-tt2g5"
	
	
	==> kubelet <==
	Mar 08 03:11:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:11:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:11:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:12:25 ha-576225 kubelet[1359]: I0308 03:12:25.750175    1359 topology_manager.go:215] "Topology Admit Handler" podUID="d8bc0fba-1a5c-4082-a505-a0653c59180a" podNamespace="default" podName="busybox-5b5d89c9d6-9594n"
	Mar 08 03:12:25 ha-576225 kubelet[1359]: I0308 03:12:25.819164    1359 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5g9\" (UniqueName: \"kubernetes.io/projected/d8bc0fba-1a5c-4082-a505-a0653c59180a-kube-api-access-2t5g9\") pod \"busybox-5b5d89c9d6-9594n\" (UID: \"d8bc0fba-1a5c-4082-a505-a0653c59180a\") " pod="default/busybox-5b5d89c9d6-9594n"
	Mar 08 03:12:29 ha-576225 kubelet[1359]: E0308 03:12:29.006281    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:12:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:13:29 ha-576225 kubelet[1359]: E0308 03:13:29.008936    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:13:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:14:29 ha-576225 kubelet[1359]: E0308 03:14:29.005243    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:14:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:15:29 ha-576225 kubelet[1359]: E0308 03:15:29.004474    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:15:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-576225 -n ha-576225
helpers_test.go:261: (dbg) Run:  kubectl --context ha-576225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (142.08s)

TestMutliControlPlane/serial/RestartSecondaryNode (56.37s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (3.193520713s)

-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0308 03:15:56.193644  932259 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:15:56.193750  932259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:56.193759  932259 out.go:304] Setting ErrFile to fd 2...
	I0308 03:15:56.193763  932259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:15:56.193959  932259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:15:56.194113  932259 out.go:298] Setting JSON to false
	I0308 03:15:56.194141  932259 mustload.go:65] Loading cluster: ha-576225
	I0308 03:15:56.194193  932259 notify.go:220] Checking for updates...
	I0308 03:15:56.195975  932259 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:15:56.196002  932259 status.go:255] checking status of ha-576225 ...
	I0308 03:15:56.196443  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.196515  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.212091  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0308 03:15:56.212524  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.213058  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.213082  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.213496  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.213669  932259 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:15:56.215296  932259 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:15:56.215314  932259 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:15:56.215641  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.215688  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.230138  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0308 03:15:56.230521  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.230969  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.230992  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.231381  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.231606  932259 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:15:56.234373  932259 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:56.234819  932259 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:15:56.234856  932259 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:56.234984  932259 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:15:56.235373  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.235433  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.250472  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0308 03:15:56.250816  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.251274  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.251297  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.251644  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.251866  932259 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:15:56.252151  932259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:56.252179  932259 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:15:56.254773  932259 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:56.255183  932259 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:15:56.255212  932259 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:15:56.255332  932259 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:15:56.255490  932259 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:15:56.255632  932259 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:15:56.255774  932259 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:15:56.347709  932259 ssh_runner.go:195] Run: systemctl --version
	I0308 03:15:56.354553  932259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:56.370405  932259 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:15:56.370433  932259 api_server.go:166] Checking apiserver status ...
	I0308 03:15:56.370466  932259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:15:56.384742  932259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:15:56.394947  932259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:15:56.394998  932259 ssh_runner.go:195] Run: ls
	I0308 03:15:56.399720  932259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:15:56.405366  932259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:15:56.405392  932259 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:15:56.405406  932259 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:56.405435  932259 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:15:56.405843  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.405882  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.422240  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0308 03:15:56.422763  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.423325  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.423354  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.423680  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.423885  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:15:56.425395  932259 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:15:56.425413  932259 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:15:56.425686  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.425724  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.439998  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0308 03:15:56.440384  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.440835  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.440864  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.441210  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.441431  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:15:56.444326  932259 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:56.444751  932259 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:15:56.444783  932259 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:56.444905  932259 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:15:56.445211  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:56.445252  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:56.459888  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0308 03:15:56.460387  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:56.460840  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:56.460860  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:56.461166  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:56.461433  932259 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:15:56.461632  932259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:56.461654  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:15:56.464234  932259 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:56.464639  932259 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:15:56.464662  932259 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:15:56.464844  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:15:56.465022  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:15:56.465177  932259 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:15:56.465410  932259 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:15:58.961633  932259 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:15:58.961741  932259 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:15:58.961765  932259 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:15:58.961778  932259 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:15:58.961803  932259 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:15:58.961834  932259 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:15:58.962160  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:58.962219  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:58.979154  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I0308 03:15:58.979688  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:58.980336  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:58.980368  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:58.980754  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:58.981057  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:15:58.982851  932259 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:15:58.982876  932259 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:15:58.983156  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:58.983207  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:58.999258  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0308 03:15:58.999677  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:59.000181  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:59.000204  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:59.000567  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:59.000773  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:15:59.003417  932259 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:59.003994  932259 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:15:59.004047  932259 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:59.004152  932259 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:15:59.004478  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:59.004524  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:59.018618  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0308 03:15:59.019064  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:59.019602  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:59.019632  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:59.019938  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:59.020089  932259 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:15:59.020284  932259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:59.020305  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:15:59.022932  932259 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:59.023390  932259 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:15:59.023417  932259 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:15:59.023534  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:15:59.023698  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:15:59.023839  932259 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:15:59.024000  932259 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:15:59.107695  932259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:59.125189  932259 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:15:59.125216  932259 api_server.go:166] Checking apiserver status ...
	I0308 03:15:59.125251  932259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:15:59.141570  932259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:15:59.153630  932259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:15:59.153688  932259 ssh_runner.go:195] Run: ls
	I0308 03:15:59.158847  932259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:15:59.163962  932259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:15:59.163994  932259 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:15:59.164007  932259 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:15:59.164035  932259 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:15:59.164384  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:59.164432  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:59.179742  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0308 03:15:59.180170  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:59.180729  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:59.180760  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:59.181117  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:59.181410  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:15:59.183341  932259 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:15:59.183360  932259 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:15:59.183764  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:59.183856  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:59.199260  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38091
	I0308 03:15:59.199647  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:59.200077  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:59.200107  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:59.200474  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:59.200664  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:15:59.203900  932259 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:59.204382  932259 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:15:59.204426  932259 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:59.204507  932259 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:15:59.204933  932259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:15:59.205002  932259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:15:59.220263  932259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0308 03:15:59.220636  932259 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:15:59.221155  932259 main.go:141] libmachine: Using API Version  1
	I0308 03:15:59.221182  932259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:15:59.221537  932259 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:15:59.221736  932259 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:15:59.221896  932259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:15:59.221919  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:15:59.224505  932259 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:59.224931  932259 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:15:59.224960  932259 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:15:59.225080  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:15:59.225255  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:15:59.225492  932259 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:15:59.225672  932259 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:15:59.313895  932259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:15:59.329779  932259 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (4.907791608s)

-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0308 03:16:00.632052  932355 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:00.632375  932355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:00.632417  932355 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:00.632425  932355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:00.632907  932355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:00.633199  932355 out.go:298] Setting JSON to false
	I0308 03:16:00.633231  932355 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:00.633558  932355 notify.go:220] Checking for updates...
	I0308 03:16:00.634169  932355 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:00.634215  932355 status.go:255] checking status of ha-576225 ...
	I0308 03:16:00.634656  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.634733  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.654159  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0308 03:16:00.654631  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.655158  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.655188  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.655530  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.655763  932355 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:00.657452  932355 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:00.657470  932355 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:00.657736  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.657773  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.672387  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38157
	I0308 03:16:00.672744  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.673181  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.673204  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.673656  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.673923  932355 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:00.676685  932355 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:00.677151  932355 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:00.677186  932355 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:00.677300  932355 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:00.677581  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.677615  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.692168  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44611
	I0308 03:16:00.692581  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.693076  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.693099  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.693444  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.693639  932355 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:00.693813  932355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:00.693842  932355 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:00.696189  932355 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:00.696601  932355 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:00.696626  932355 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:00.696759  932355 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:00.696938  932355 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:00.697095  932355 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:00.697299  932355 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:00.787300  932355 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:00.796907  932355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:00.813929  932355 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:00.813962  932355 api_server.go:166] Checking apiserver status ...
	I0308 03:16:00.814011  932355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:00.828011  932355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:00.840905  932355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:00.840974  932355 ssh_runner.go:195] Run: ls
	I0308 03:16:00.845990  932355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:00.850442  932355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:00.850463  932355 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:00.850473  932355 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:00.850489  932355 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:00.850837  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.850879  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.866468  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0308 03:16:00.866925  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.867406  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.867448  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.867783  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.867979  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:00.869722  932355 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:16:00.869741  932355 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:00.870006  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.870041  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.884996  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I0308 03:16:00.885380  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.885832  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.885853  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.886175  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.886386  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:16:00.889206  932355 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:00.889641  932355 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:00.889668  932355 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:00.889804  932355 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:00.890093  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:00.890135  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:00.905367  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0308 03:16:00.905806  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:00.906237  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:00.906262  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:00.906574  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:00.906733  932355 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:16:00.907024  932355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:00.907054  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:16:00.909903  932355 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:00.910413  932355 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:00.910443  932355 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:00.910591  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:16:00.910769  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:16:00.910930  932355 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:16:00.911150  932355 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:16:02.033581  932355 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:02.033645  932355 retry.go:31] will retry after 141.636613ms: dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:05.109594  932355 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:05.109719  932355 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:16:05.109748  932355 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:05.109758  932355 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:16:05.109793  932355 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:05.109803  932355 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:05.110181  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.110243  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.125362  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0308 03:16:05.125853  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.126421  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.126454  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.126850  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.127101  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:05.128786  932355 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:05.128805  932355 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:05.129086  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.129124  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.144572  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0308 03:16:05.144943  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.145434  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.145457  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.145825  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.146002  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:05.148841  932355 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:05.149256  932355 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:05.149301  932355 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:05.149419  932355 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:05.149712  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.149746  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.164387  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0308 03:16:05.164737  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.165186  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.165212  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.165593  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.165785  932355 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:05.165998  932355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:05.166043  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:05.168765  932355 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:05.169159  932355 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:05.169187  932355 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:05.169425  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:05.169616  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:05.169762  932355 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:05.169921  932355 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:05.253709  932355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:05.270118  932355 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:05.270154  932355 api_server.go:166] Checking apiserver status ...
	I0308 03:16:05.270212  932355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:05.293329  932355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:05.306747  932355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:05.306792  932355 ssh_runner.go:195] Run: ls
	I0308 03:16:05.312626  932355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:05.318721  932355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:05.318750  932355 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:05.318763  932355 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:05.318784  932355 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:05.319183  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.319229  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.335123  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0308 03:16:05.335557  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.336089  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.336117  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.336455  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.336639  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:05.338210  932355 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:05.338230  932355 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:05.338661  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.338712  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.353032  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33587
	I0308 03:16:05.353456  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.354028  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.354056  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.354470  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.354671  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:05.357348  932355 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:05.357800  932355 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:05.357834  932355 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:05.357984  932355 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:05.358385  932355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:05.358438  932355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:05.372697  932355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0308 03:16:05.373109  932355 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:05.373598  932355 main.go:141] libmachine: Using API Version  1
	I0308 03:16:05.373624  932355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:05.373950  932355 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:05.374188  932355 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:05.374365  932355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:05.374387  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:05.376975  932355 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:05.377405  932355 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:05.377442  932355 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:05.377570  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:05.377774  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:05.377928  932355 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:05.378062  932355 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:05.461455  932355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:05.477949  932355 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
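The trace above shows the probe sequence each `minikube status` invocation runs per node: open an SSH session, check /var usage with df/awk, check kubelet liveness with systemctl, and, for control-plane nodes only, probe the apiserver's /healthz endpoint on the load-balancer address 192.168.39.254:8443. Below is a minimal, self-contained Go sketch of that sequence, not minikube's own status.go: the runSSH helper and the use of the system ssh binary with an unverified TLS client are illustrative assumptions; the IP, key path, and shell commands are copied from the log.

// probe.go: a sketch of the per-node checks traced above (assumptions noted inline).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
)

// runSSH is a hypothetical helper: it shells out to the ssh binary instead of
// using minikube's internal ssh_runner, which is not reproduced here.
func runSSH(ip, key, cmd string) (string, error) {
	out, err := exec.Command("ssh", "-i", key, "docker@"+ip, cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Primary control-plane node ha-576225; address and key path taken from the log.
	ip := "192.168.39.251"
	key := "/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa"

	// Disk-usage check, as in the trace: df -h /var | awk 'NR==2{print $5}'
	if out, err := runSSH(ip, key, `df -h /var | awk 'NR==2{print $5}'`); err == nil {
		fmt.Printf("/var usage: %s", out)
	}

	// Kubelet liveness check, exactly the command run over SSH in the trace.
	if _, err := runSSH(ip, key, "sudo systemctl is-active --quiet service kubelet"); err == nil {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Nonexistent")
	}

	// Control-plane nodes additionally get an HTTPS healthz probe against the VIP.
	// Skipping certificate verification is an assumption made for this sketch.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver: Running")
		}
	}
}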
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (4.795226467s)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:16:07.103785  932462 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:07.104104  932462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:07.104119  932462 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:07.104126  932462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:07.104389  932462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:07.104627  932462 out.go:298] Setting JSON to false
	I0308 03:16:07.104657  932462 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:07.104766  932462 notify.go:220] Checking for updates...
	I0308 03:16:07.105183  932462 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:07.105204  932462 status.go:255] checking status of ha-576225 ...
	I0308 03:16:07.105767  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.105845  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.126571  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0308 03:16:07.127078  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.127697  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.127737  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.128076  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.128255  932462 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:07.130086  932462 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:07.130107  932462 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:07.130359  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.130392  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.145652  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0308 03:16:07.146086  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.146528  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.146550  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.146980  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.147206  932462 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:07.150113  932462 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:07.150661  932462 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:07.150696  932462 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:07.150865  932462 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:07.151275  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.151317  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.166338  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0308 03:16:07.166699  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.167128  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.167149  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.167499  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.167682  932462 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:07.167890  932462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:07.167912  932462 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:07.170556  932462 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:07.171042  932462 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:07.171070  932462 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:07.171219  932462 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:07.171381  932462 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:07.171541  932462 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:07.171716  932462 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:07.260656  932462 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:07.267775  932462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:07.284720  932462 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:07.284752  932462 api_server.go:166] Checking apiserver status ...
	I0308 03:16:07.284784  932462 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:07.308464  932462 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:07.321882  932462 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:07.321944  932462 ssh_runner.go:195] Run: ls
	I0308 03:16:07.327470  932462 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:07.331972  932462 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:07.331994  932462 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:07.332010  932462 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:07.332041  932462 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:07.332348  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.332391  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.348596  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0308 03:16:07.349024  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.349479  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.349499  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.349859  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.350088  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:07.351759  932462 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:16:07.351778  932462 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:07.352093  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.352145  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.366629  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35229
	I0308 03:16:07.367086  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.367564  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.367587  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.367885  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.368082  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:16:07.370671  932462 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:07.371139  932462 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:07.371166  932462 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:07.371320  932462 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:07.371620  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:07.371662  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:07.385930  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0308 03:16:07.386353  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:07.386787  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:07.386811  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:07.387554  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:07.388061  932462 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:16:07.388483  932462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:07.388554  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:16:07.392983  932462 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:07.393359  932462 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:07.393378  932462 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:07.393554  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:16:07.393738  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:16:07.393932  932462 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:16:07.394139  932462 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:16:08.177560  932462 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:08.177635  932462 retry.go:31] will retry after 226.537956ms: dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:11.477533  932462 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:11.477639  932462 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:16:11.477666  932462 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:11.477679  932462 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:16:11.477702  932462 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:11.477710  932462 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:11.478122  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.478194  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.494499  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0308 03:16:11.495027  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.495638  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.495662  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.496006  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.496227  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:11.497886  932462 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:11.497903  932462 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:11.498197  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.498272  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.512318  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0308 03:16:11.512759  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.513208  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.513229  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.513544  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.513729  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:11.516522  932462 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:11.516974  932462 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:11.517011  932462 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:11.517175  932462 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:11.517540  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.517587  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.532573  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0308 03:16:11.532932  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.533382  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.533407  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.533713  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.533923  932462 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:11.534095  932462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:11.534118  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:11.536792  932462 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:11.537196  932462 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:11.537229  932462 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:11.537406  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:11.537612  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:11.537755  932462 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:11.537875  932462 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:11.618028  932462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:11.637660  932462 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:11.637698  932462 api_server.go:166] Checking apiserver status ...
	I0308 03:16:11.637749  932462 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:11.653067  932462 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:11.663512  932462 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:11.663560  932462 ssh_runner.go:195] Run: ls
	I0308 03:16:11.668358  932462 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:11.675172  932462 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:11.675198  932462 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:11.675207  932462 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:11.675223  932462 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:11.675522  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.675555  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.691088  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
	I0308 03:16:11.691562  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.692133  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.692156  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.692536  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.692756  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:11.694584  932462 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:11.694607  932462 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:11.695013  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.695065  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.712061  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0308 03:16:11.712400  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.712872  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.712897  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.713298  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.713511  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:11.716352  932462 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:11.716770  932462 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:11.716790  932462 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:11.716938  932462 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:11.717208  932462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:11.717241  932462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:11.732062  932462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0308 03:16:11.732439  932462 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:11.732874  932462 main.go:141] libmachine: Using API Version  1
	I0308 03:16:11.732901  932462 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:11.733344  932462 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:11.733579  932462 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:11.733792  932462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:11.733814  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:11.736757  932462 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:11.737224  932462 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:11.737258  932462 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:11.737441  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:11.737642  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:11.737844  932462 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:11.738089  932462 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:11.821390  932462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:11.837058  932462 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
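Each repeated status run fails at the same step: the SSH dial to ha-576225-m02 (192.168.39.128:22) returns "no route to host", the runner retries once after a short backoff, and the node is then reported as Host:Error / Kubelet:Nonexistent. The following Go sketch reproduces only the dial-and-retry behaviour visible in the log; the fixed two-attempt, 226ms policy is an assumption for illustration and not minikube's actual retry.go logic.

// dialcheck.go: a sketch of the SSH reachability check that fails for m02 above.
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable tries a plain TCP dial to the SSH port a few times before
// giving up, mirroring the "dial failure (will retry)" lines in the trace.
func sshReachable(addr string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 3*time.Second); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(backoff)
	}
	return err
}

func main() {
	// 192.168.39.128 is ha-576225-m02's leased address from the log.
	if err := sshReachable("192.168.39.128:22", 2, 226*time.Millisecond); err != nil {
		// Matches the outcome in the trace: host status = Error.
		fmt.Println("ha-576225-m02 host status = Error:", err)
	}
}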
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (4.391681263s)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:16:13.866265  932557 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:13.866374  932557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:13.866383  932557 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:13.866390  932557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:13.866616  932557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:13.866787  932557 out.go:298] Setting JSON to false
	I0308 03:16:13.866818  932557 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:13.866951  932557 notify.go:220] Checking for updates...
	I0308 03:16:13.867170  932557 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:13.867185  932557 status.go:255] checking status of ha-576225 ...
	I0308 03:16:13.867530  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:13.867590  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:13.883518  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0308 03:16:13.883970  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:13.884579  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:13.884615  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:13.885041  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:13.885225  932557 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:13.887062  932557 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:13.887086  932557 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:13.887446  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:13.887500  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:13.903373  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0308 03:16:13.903794  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:13.904313  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:13.904337  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:13.904681  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:13.904913  932557 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:13.907807  932557 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:13.908244  932557 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:13.908280  932557 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:13.908447  932557 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:13.908716  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:13.908758  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:13.923718  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0308 03:16:13.924157  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:13.924654  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:13.924679  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:13.925078  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:13.925294  932557 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:13.925496  932557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:13.925534  932557 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:13.928038  932557 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:13.928454  932557 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:13.928479  932557 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:13.928641  932557 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:13.928819  932557 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:13.928979  932557 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:13.929080  932557 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:14.023213  932557 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:14.030259  932557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:14.050281  932557 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:14.050307  932557 api_server.go:166] Checking apiserver status ...
	I0308 03:16:14.050340  932557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:14.065863  932557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:14.078370  932557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:14.078417  932557 ssh_runner.go:195] Run: ls
	I0308 03:16:14.083849  932557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:14.088857  932557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:14.088885  932557 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:14.088899  932557 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:14.088937  932557 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:14.089256  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:14.089310  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:14.107117  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I0308 03:16:14.107678  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:14.108165  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:14.108187  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:14.108626  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:14.108838  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:14.110721  932557 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:16:14.110741  932557 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:14.111211  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:14.111257  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:14.129530  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0308 03:16:14.129905  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:14.130334  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:14.130357  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:14.130779  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:14.131007  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:16:14.133720  932557 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:14.134225  932557 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:14.134257  932557 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:14.134416  932557 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:14.134807  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:14.134853  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:14.149955  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0308 03:16:14.150777  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:14.151405  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:14.151423  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:14.152078  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:14.152518  932557 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:16:14.152730  932557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:14.152757  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:16:14.155085  932557 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:14.155402  932557 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:14.155426  932557 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:14.155692  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:16:14.155893  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:16:14.156054  932557 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:16:14.156248  932557 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:16:14.545537  932557 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:14.545591  932557 retry.go:31] will retry after 236.516316ms: dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:17.841583  932557 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:17.841726  932557 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:16:17.841757  932557 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:17.841770  932557 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:16:17.841805  932557 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:17.841822  932557 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:17.842191  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:17.842264  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:17.858281  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45447
	I0308 03:16:17.858787  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:17.859349  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:17.859384  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:17.859797  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:17.860004  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:17.861834  932557 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:17.861863  932557 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:17.862169  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:17.862215  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:17.877708  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0308 03:16:17.878198  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:17.878689  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:17.878712  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:17.879045  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:17.879248  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:17.882053  932557 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:17.882430  932557 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:17.882456  932557 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:17.882618  932557 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:17.882904  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:17.882945  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:17.897460  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0308 03:16:17.897921  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:17.898376  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:17.898402  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:17.898770  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:17.898959  932557 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:17.899186  932557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:17.899217  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:17.901790  932557 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:17.902247  932557 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:17.902279  932557 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:17.902515  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:17.902693  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:17.902872  932557 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:17.903038  932557 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:17.985726  932557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:18.002685  932557 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:18.002718  932557 api_server.go:166] Checking apiserver status ...
	I0308 03:16:18.002770  932557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:18.018424  932557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:18.029538  932557 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:18.029582  932557 ssh_runner.go:195] Run: ls
	I0308 03:16:18.035300  932557 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:18.039827  932557 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:18.039849  932557 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:18.039858  932557 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:18.039873  932557 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:18.040221  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:18.040261  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:18.055500  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33267
	I0308 03:16:18.055968  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:18.056397  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:18.056420  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:18.056815  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:18.057034  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:18.058879  932557 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:18.058909  932557 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:18.059181  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:18.059220  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:18.073711  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0308 03:16:18.074159  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:18.074597  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:18.074619  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:18.074929  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:18.075118  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:18.078118  932557 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:18.078516  932557 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:18.078545  932557 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:18.078711  932557 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:18.079009  932557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:18.079044  932557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:18.093517  932557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0308 03:16:18.093939  932557 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:18.094371  932557 main.go:141] libmachine: Using API Version  1
	I0308 03:16:18.094394  932557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:18.094731  932557 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:18.094901  932557 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:18.095090  932557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:18.095125  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:18.097472  932557 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:18.097899  932557 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:18.097933  932557 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:18.098078  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:18.098275  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:18.098428  932557 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:18.098615  932557 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:18.181854  932557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:18.198872  932557 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
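The recurring "unable to find freezer cgroup" warning in these traces is benign on cgroup v2 hosts, where no per-controller freezer line exists in /proc/<pid>/cgroup, so the egrep exits 1 and the status check falls back to the /healthz probe, which returns 200. A minimal Go sketch of that detection follows; treating PID 1152 as the apiserver PID is taken from the log, and the fallback behaviour described here is an interpretation of the trace rather than minikube's documented logic.

// cgroupcheck.go: sketch of why the freezer-cgroup grep in the trace exits 1.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasFreezerCgroup scans /proc/<pid>/cgroup for a v1-style ":freezer:" entry.
// On cgroup v2 the file contains a single "0::/..." line, so this returns false.
func hasFreezerCgroup(pid int) (bool, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.Contains(s.Text(), ":freezer:") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	// 1152 is the apiserver PID reported by pgrep in the trace; on another host
	// this value would differ.
	ok, err := hasFreezerCgroup(1152)
	fmt.Println("freezer cgroup found:", ok, "err:", err)
}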
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (3.761966556s)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:16:22.702759  932664 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:22.702898  932664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:22.702911  932664 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:22.702918  932664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:22.703126  932664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:22.703331  932664 out.go:298] Setting JSON to false
	I0308 03:16:22.703362  932664 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:22.703410  932664 notify.go:220] Checking for updates...
	I0308 03:16:22.703908  932664 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:22.703933  932664 status.go:255] checking status of ha-576225 ...
	I0308 03:16:22.704412  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.704478  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.723242  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0308 03:16:22.723694  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.724383  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.724423  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.724748  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.724954  932664 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:22.726587  932664 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:22.726610  932664 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:22.726875  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.726912  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.744800  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34687
	I0308 03:16:22.745196  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.745685  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.745709  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.746071  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.746249  932664 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:22.749313  932664 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:22.749735  932664 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:22.749772  932664 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:22.749932  932664 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:22.750337  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.750384  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.765035  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0308 03:16:22.765464  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.765913  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.765926  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.766214  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.766417  932664 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:22.766585  932664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:22.766613  932664 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:22.769150  932664 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:22.769578  932664 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:22.769605  932664 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:22.769751  932664 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:22.769928  932664 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:22.770085  932664 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:22.770206  932664 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:22.857360  932664 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:22.864666  932664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:22.879635  932664 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:22.879670  932664 api_server.go:166] Checking apiserver status ...
	I0308 03:16:22.879711  932664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:22.893651  932664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:22.906183  932664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:22.906240  932664 ssh_runner.go:195] Run: ls
	I0308 03:16:22.911787  932664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:22.919617  932664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:22.919641  932664 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:22.919655  932664 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:22.919690  932664 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:22.919985  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.920034  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.935144  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0308 03:16:22.935600  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.936106  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.936134  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.936494  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.936775  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:22.938501  932664 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:16:22.938524  932664 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:22.938839  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.938880  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.954940  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
	I0308 03:16:22.955350  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.955857  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.955882  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.956181  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.956391  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:16:22.959406  932664 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:22.959874  932664 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:22.959897  932664 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:22.960032  932664 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:16:22.960343  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:22.960377  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:22.975885  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35597
	I0308 03:16:22.976295  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:22.976805  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:22.976841  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:22.977153  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:22.977328  932664 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:16:22.977512  932664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:22.977534  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:16:22.980511  932664 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:22.980993  932664 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:16:22.981016  932664 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:16:22.981321  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:16:22.981541  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:16:22.981723  932664 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:16:22.981868  932664 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	W0308 03:16:26.033540  932664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.128:22: connect: no route to host
	W0308 03:16:26.033648  932664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0308 03:16:26.033672  932664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:26.033686  932664 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:16:26.033710  932664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	I0308 03:16:26.033750  932664 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:26.034752  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.034810  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.050852  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0308 03:16:26.051353  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.051835  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.051855  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.052240  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.052457  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:26.053982  932664 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:26.054007  932664 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:26.054376  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.054422  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.069485  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0308 03:16:26.069951  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.070431  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.070456  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.070756  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.070986  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:26.074017  932664 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:26.074566  932664 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:26.074596  932664 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:26.074745  932664 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:26.075069  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.075109  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.088903  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0308 03:16:26.089371  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.089860  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.089878  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.090243  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.090424  932664 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:26.090617  932664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:26.090641  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:26.093165  932664 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:26.093613  932664 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:26.093640  932664 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:26.093757  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:26.093920  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:26.094063  932664 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:26.094220  932664 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:26.183076  932664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:26.203534  932664 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:26.203566  932664 api_server.go:166] Checking apiserver status ...
	I0308 03:16:26.203603  932664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:26.220076  932664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:26.231189  932664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:26.231247  932664 ssh_runner.go:195] Run: ls
	I0308 03:16:26.237186  932664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:26.244115  932664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:26.244142  932664 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:26.244152  932664 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:26.244173  932664 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:26.244471  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.244518  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.262892  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0308 03:16:26.263309  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.263821  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.263856  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.264219  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.264413  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:26.266066  932664 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:26.266087  932664 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:26.266364  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.266399  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.280905  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0308 03:16:26.281287  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.281712  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.281735  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.282111  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.282324  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:26.284886  932664 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:26.285394  932664 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:26.285434  932664 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:26.285640  932664 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:26.285923  932664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:26.285957  932664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:26.300661  932664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0308 03:16:26.301013  932664 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:26.301477  932664 main.go:141] libmachine: Using API Version  1
	I0308 03:16:26.301510  932664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:26.301806  932664 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:26.301999  932664 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:26.302190  932664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:26.302209  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:26.304811  932664 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:26.305264  932664 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:26.305306  932664 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:26.305467  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:26.305723  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:26.305947  932664 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:26.306152  932664 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:26.389796  932664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:26.405902  932664 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 7 (679.102222ms)

-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0308 03:16:32.519781  932784 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:32.519963  932784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:32.519977  932784 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:32.519984  932784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:32.520286  932784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:32.520526  932784 out.go:298] Setting JSON to false
	I0308 03:16:32.520566  932784 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:32.520643  932784 notify.go:220] Checking for updates...
	I0308 03:16:32.521290  932784 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:32.521319  932784 status.go:255] checking status of ha-576225 ...
	I0308 03:16:32.521823  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.521875  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.540131  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35299
	I0308 03:16:32.540571  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.541308  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.541348  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.541724  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.541959  932784 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:32.543659  932784 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:32.543680  932784 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:32.543977  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.544013  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.558478  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36823
	I0308 03:16:32.558952  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.559406  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.559428  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.559750  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.559940  932784 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:32.562690  932784 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:32.563123  932784 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:32.563153  932784 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:32.563318  932784 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:32.563593  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.563630  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.578608  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40111
	I0308 03:16:32.579006  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.579401  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.579419  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.579742  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.579933  932784 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:32.580128  932784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:32.580173  932784 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:32.582602  932784 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:32.582975  932784 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:32.583016  932784 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:32.583115  932784 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:32.583326  932784 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:32.583516  932784 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:32.583667  932784 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:32.679249  932784 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:32.686814  932784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:32.709360  932784 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:32.709385  932784 api_server.go:166] Checking apiserver status ...
	I0308 03:16:32.709418  932784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:32.727271  932784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:32.738436  932784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:32.738497  932784 ssh_runner.go:195] Run: ls
	I0308 03:16:32.744355  932784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:32.749199  932784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:32.749222  932784 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:32.749232  932784 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:32.749262  932784 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:32.749665  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.749713  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.765500  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0308 03:16:32.766032  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.766558  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.766581  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.766971  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.767187  932784 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:32.773866  932784 status.go:330] ha-576225-m02 host status = "Stopped" (err=<nil>)
	I0308 03:16:32.773883  932784 status.go:343] host is not running, skipping remaining checks
	I0308 03:16:32.773889  932784 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:32.773907  932784 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:32.774342  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.774392  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.788779  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42677
	I0308 03:16:32.789301  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.789841  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.789867  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.790166  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.790394  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:32.791941  932784 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:32.791959  932784 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:32.792327  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.792371  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.806765  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0308 03:16:32.807216  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.807668  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.807691  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.807943  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.808167  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:32.811130  932784 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:32.811651  932784 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:32.811692  932784 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:32.811842  932784 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:32.812148  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.812184  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.827027  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0308 03:16:32.827480  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.828021  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.828044  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.828371  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.828597  932784 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:32.828813  932784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:32.828839  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:32.831693  932784 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:32.832161  932784 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:32.832208  932784 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:32.832335  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:32.832500  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:32.832676  932784 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:32.832796  932784 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:32.914603  932784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:32.933982  932784 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:32.934015  932784 api_server.go:166] Checking apiserver status ...
	I0308 03:16:32.934055  932784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:32.949809  932784 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:32.961005  932784 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:32.961057  932784 ssh_runner.go:195] Run: ls
	I0308 03:16:32.966795  932784 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:32.972233  932784 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:32.972254  932784 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:32.972263  932784 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:32.972279  932784 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:32.972625  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.972668  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:32.989843  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0308 03:16:32.990333  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:32.990846  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:32.990870  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:32.991281  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:32.991479  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:32.993044  932784 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:32.993060  932784 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:32.993393  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:32.993429  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:33.008123  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0308 03:16:33.008651  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:33.009134  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:33.009162  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:33.009525  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:33.009729  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:33.012856  932784 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:33.013435  932784 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:33.013464  932784 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:33.013624  932784 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:33.014038  932784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:33.014084  932784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:33.032063  932784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0308 03:16:33.032486  932784 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:33.032959  932784 main.go:141] libmachine: Using API Version  1
	I0308 03:16:33.032982  932784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:33.033377  932784 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:33.033591  932784 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:33.033795  932784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:33.033826  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:33.036891  932784 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:33.037238  932784 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:33.037255  932784 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:33.037452  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:33.037624  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:33.037767  932784 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:33.037903  932784 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:33.121158  932784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:33.138191  932784 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 7 (665.719716ms)

-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0308 03:16:41.029473  932874 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:41.029619  932874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:41.029630  932874 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:41.029636  932874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:41.029853  932874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:41.030063  932874 out.go:298] Setting JSON to false
	I0308 03:16:41.030095  932874 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:41.030153  932874 notify.go:220] Checking for updates...
	I0308 03:16:41.030621  932874 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:41.030654  932874 status.go:255] checking status of ha-576225 ...
	I0308 03:16:41.031199  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.031285  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.051441  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35439
	I0308 03:16:41.051888  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.052420  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.052442  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.052789  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.053004  932874 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:41.054745  932874 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:41.054763  932874 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:41.055052  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.055105  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.072140  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0308 03:16:41.072620  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.073253  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.073302  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.073720  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.073976  932874 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:41.077093  932874 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:41.077595  932874 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:41.077630  932874 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:41.077900  932874 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:41.078214  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.078265  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.093619  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46363
	I0308 03:16:41.094021  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.094507  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.094530  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.094870  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.095080  932874 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:41.095259  932874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:41.095293  932874 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:41.098428  932874 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:41.098905  932874 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:41.098938  932874 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:41.099059  932874 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:41.099265  932874 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:41.099399  932874 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:41.099526  932874 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:41.185682  932874 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:41.193032  932874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:41.209717  932874 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:41.209750  932874 api_server.go:166] Checking apiserver status ...
	I0308 03:16:41.209802  932874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:41.227178  932874 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:41.240010  932874 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:41.240049  932874 ssh_runner.go:195] Run: ls
	I0308 03:16:41.245573  932874 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:41.250362  932874 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:41.250386  932874 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:41.250398  932874 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:41.250426  932874 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:41.250775  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.250841  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.267025  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0308 03:16:41.267441  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.268009  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.268032  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.268381  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.268613  932874 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:41.270408  932874 status.go:330] ha-576225-m02 host status = "Stopped" (err=<nil>)
	I0308 03:16:41.270424  932874 status.go:343] host is not running, skipping remaining checks
	I0308 03:16:41.270432  932874 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:41.270454  932874 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:41.270742  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.270778  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.285716  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0308 03:16:41.286169  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.286596  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.286626  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.286917  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.287132  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:41.288633  932874 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:41.288650  932874 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:41.288954  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.288996  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.303858  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I0308 03:16:41.304216  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.304644  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.304670  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.305033  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.305230  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:41.308105  932874 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:41.308597  932874 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:41.308623  932874 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:41.308764  932874 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:41.309050  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.309086  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.323451  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0308 03:16:41.323814  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.324333  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.324358  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.324703  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.324878  932874 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:41.325064  932874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:41.325101  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:41.327854  932874 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:41.328325  932874 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:41.328346  932874 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:41.328517  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:41.328728  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:41.328871  932874 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:41.329045  932874 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:41.409964  932874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:41.427303  932874 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:41.427344  932874 api_server.go:166] Checking apiserver status ...
	I0308 03:16:41.427385  932874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:41.443026  932874 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:41.454498  932874 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:41.454550  932874 ssh_runner.go:195] Run: ls
	I0308 03:16:41.459390  932874 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:41.466681  932874 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:41.466701  932874 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:41.466710  932874 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:41.466726  932874 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:41.467011  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.467060  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.482448  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I0308 03:16:41.482908  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.483367  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.483387  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.483684  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.483887  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:41.485467  932874 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:41.485488  932874 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:41.485761  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.485795  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.502222  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I0308 03:16:41.502646  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.503124  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.503149  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.503522  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.503791  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:41.506743  932874 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:41.507220  932874 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:41.507247  932874 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:41.507461  932874 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:41.507806  932874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:41.507853  932874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:41.522921  932874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0308 03:16:41.523307  932874 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:41.523788  932874 main.go:141] libmachine: Using API Version  1
	I0308 03:16:41.523813  932874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:41.524181  932874 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:41.524421  932874 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:41.524626  932874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:41.524651  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:41.527748  932874 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:41.528213  932874 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:41.528234  932874 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:41.528449  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:41.528625  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:41.528826  932874 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:41.528963  932874 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:41.614632  932874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:41.631893  932874 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
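A note on the per-node probe visible in the stderr block above: before reporting each node, the status command runs a small disk-usage check over SSH (the ssh_runner.go:195 lines), sh -c "df -h /var | awk 'NR==2{print $5}'", which prints only the Use% column for the guest's /var filesystem. An illustrative run of the same probe (the percentage shown is a placeholder, not taken from this job):

	$ df -h /var | awk 'NR==2{print $5}'
	17%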
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 7 (676.260137ms)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-576225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:16:49.353061  932970 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:49.353391  932970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:49.353402  932970 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:49.353408  932970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:49.353592  932970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:49.353808  932970 out.go:298] Setting JSON to false
	I0308 03:16:49.353846  932970 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:49.353977  932970 notify.go:220] Checking for updates...
	I0308 03:16:49.354311  932970 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:49.354332  932970 status.go:255] checking status of ha-576225 ...
	I0308 03:16:49.354754  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.354844  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.375294  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42131
	I0308 03:16:49.375830  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.376462  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.376495  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.376925  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.377117  932970 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:16:49.378828  932970 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:16:49.378845  932970 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:49.379156  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.379209  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.394603  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0308 03:16:49.395025  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.395438  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.395454  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.395793  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.395990  932970 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:16:49.398765  932970 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:49.399295  932970 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:49.399325  932970 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:49.399463  932970 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:16:49.399718  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.399756  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.414814  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0308 03:16:49.415156  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.415581  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.415612  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.415977  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.416200  932970 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:16:49.416392  932970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:49.416413  932970 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:16:49.419442  932970 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:49.419618  932970 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:16:49.419640  932970 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:16:49.419858  932970 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:16:49.420048  932970 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:16:49.420211  932970 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:16:49.420377  932970 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:16:49.511088  932970 ssh_runner.go:195] Run: systemctl --version
	I0308 03:16:49.517988  932970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:49.535277  932970 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:49.535302  932970 api_server.go:166] Checking apiserver status ...
	I0308 03:16:49.535331  932970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:49.551192  932970 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0308 03:16:49.564316  932970 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:49.564357  932970 ssh_runner.go:195] Run: ls
	I0308 03:16:49.570208  932970 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:49.578786  932970 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:49.578815  932970 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:16:49.578827  932970 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:49.578845  932970 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:16:49.579173  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.579218  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.594256  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0308 03:16:49.594684  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.595169  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.595201  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.595522  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.595772  932970 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:16:49.597522  932970 status.go:330] ha-576225-m02 host status = "Stopped" (err=<nil>)
	I0308 03:16:49.597539  932970 status.go:343] host is not running, skipping remaining checks
	I0308 03:16:49.597548  932970 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:49.597569  932970 status.go:255] checking status of ha-576225-m03 ...
	I0308 03:16:49.597854  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.597894  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.612501  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0308 03:16:49.612872  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.613340  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.613372  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.613744  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.613944  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:49.615598  932970 status.go:330] ha-576225-m03 host status = "Running" (err=<nil>)
	I0308 03:16:49.615617  932970 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:49.615952  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.615992  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.630551  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0308 03:16:49.630940  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.631381  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.631406  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.631786  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.632004  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:16:49.634975  932970 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:49.635400  932970 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:49.635428  932970 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:49.635593  932970 host.go:66] Checking if "ha-576225-m03" exists ...
	I0308 03:16:49.635907  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.635946  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.650511  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44061
	I0308 03:16:49.650872  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.651370  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.651396  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.651717  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.651929  932970 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:49.652153  932970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:49.652180  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:49.654926  932970 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:49.655390  932970 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:49.655413  932970 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:49.655485  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:49.655636  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:49.655779  932970 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:49.655915  932970 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:49.737821  932970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:49.756752  932970 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:16:49.756808  932970 api_server.go:166] Checking apiserver status ...
	I0308 03:16:49.756857  932970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:16:49.773647  932970 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	W0308 03:16:49.784314  932970 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:16:49.784362  932970 ssh_runner.go:195] Run: ls
	I0308 03:16:49.791817  932970 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:16:49.800262  932970 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:16:49.800282  932970 status.go:422] ha-576225-m03 apiserver status = Running (err=<nil>)
	I0308 03:16:49.800290  932970 status.go:257] ha-576225-m03 status: &{Name:ha-576225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:16:49.800307  932970 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:16:49.800629  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.800672  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.818802  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I0308 03:16:49.819211  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.819696  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.819721  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.820060  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.820294  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:49.822191  932970 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:16:49.822211  932970 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:49.822485  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.822518  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.838003  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0308 03:16:49.838381  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.838874  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.838910  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.839310  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.839570  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:16:49.842463  932970 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:49.842894  932970 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:49.842913  932970 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:49.843111  932970 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:16:49.843444  932970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:49.843490  932970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:49.857836  932970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
	I0308 03:16:49.858255  932970 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:49.858749  932970 main.go:141] libmachine: Using API Version  1
	I0308 03:16:49.858771  932970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:49.859112  932970 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:49.859337  932970 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:49.859533  932970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:16:49.859554  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:49.862350  932970 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:49.862822  932970 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:49.862849  932970 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:49.862984  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:49.863127  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:49.863232  932970 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:49.863403  932970 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:49.945397  932970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:16:49.962630  932970 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr" : exit status 7
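The non-zero exit comes from "minikube status" itself: the command exits non-zero when any node in the profile is not fully healthy, and in the output above ha-576225-m02 still reports Host/Kubelet/APIServer as Stopped even after the preceding "node start m02" call, so the test's expectation of a clean status after restarting the secondary node is not met. A minimal manual check, assuming the same binary and profile as this run:

	$ out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
	$ echo $?   # 7 while m02 is reported Stopped; 0 once every node is Running/Configured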
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-576225 -n ha-576225
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-576225 logs -n 25: (1.507476404s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m03_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m04 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp testdata/cp-test.txt                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m04_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03:/home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m03 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-576225 node stop m02 -v=7                                                     | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-576225 node start m02 -v=7                                                    | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:08:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:08:40.294148  927850 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:08:40.294432  927850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:40.294442  927850 out.go:304] Setting ErrFile to fd 2...
	I0308 03:08:40.294446  927850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:40.294655  927850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:08:40.295228  927850 out.go:298] Setting JSON to false
	I0308 03:08:40.296765  927850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24646,"bootTime":1709842674,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:08:40.297169  927850 start.go:139] virtualization: kvm guest
	I0308 03:08:40.299379  927850 out.go:177] * [ha-576225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:08:40.300758  927850 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:08:40.300761  927850 notify.go:220] Checking for updates...
	I0308 03:08:40.302317  927850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:08:40.303647  927850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:08:40.304823  927850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.306071  927850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:08:40.307161  927850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:08:40.308668  927850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:08:40.342264  927850 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 03:08:40.343403  927850 start.go:297] selected driver: kvm2
	I0308 03:08:40.343420  927850 start.go:901] validating driver "kvm2" against <nil>
	I0308 03:08:40.343431  927850 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:08:40.344121  927850 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:08:40.344187  927850 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:08:40.358749  927850 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:08:40.358788  927850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 03:08:40.358971  927850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:08:40.359033  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:08:40.359045  927850 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0308 03:08:40.359052  927850 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 03:08:40.359094  927850 start.go:340] cluster config:
	{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:08:40.359180  927850 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:08:40.360860  927850 out.go:177] * Starting "ha-576225" primary control-plane node in "ha-576225" cluster
	I0308 03:08:40.362023  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:08:40.362051  927850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:08:40.362073  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:08:40.362157  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:08:40.362178  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:08:40.362468  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:08:40.362489  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json: {Name:mkd9a9e70b40bc7cf192b47a94c5105fab3566be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:08:40.362631  927850 start.go:360] acquireMachinesLock for ha-576225: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:08:40.362666  927850 start.go:364] duration metric: took 18.948µs to acquireMachinesLock for "ha-576225"
	I0308 03:08:40.362689  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:08:40.362746  927850 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 03:08:40.364354  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:08:40.364480  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:40.364528  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:40.377890  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0308 03:08:40.378281  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:40.378824  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:08:40.378847  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:40.379150  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:40.379348  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:08:40.379499  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:08:40.379652  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:08:40.379680  927850 client.go:168] LocalClient.Create starting
	I0308 03:08:40.379730  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:08:40.379773  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:08:40.379798  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:08:40.379867  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:08:40.379893  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:08:40.379914  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:08:40.379938  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:08:40.379951  927850 main.go:141] libmachine: (ha-576225) Calling .PreCreateCheck
	I0308 03:08:40.380245  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:08:40.380589  927850 main.go:141] libmachine: Creating machine...
	I0308 03:08:40.380602  927850 main.go:141] libmachine: (ha-576225) Calling .Create
	I0308 03:08:40.380732  927850 main.go:141] libmachine: (ha-576225) Creating KVM machine...
	I0308 03:08:40.381896  927850 main.go:141] libmachine: (ha-576225) DBG | found existing default KVM network
	I0308 03:08:40.382606  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.382480  927873 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0308 03:08:40.382640  927850 main.go:141] libmachine: (ha-576225) DBG | created network xml: 
	I0308 03:08:40.382661  927850 main.go:141] libmachine: (ha-576225) DBG | <network>
	I0308 03:08:40.382671  927850 main.go:141] libmachine: (ha-576225) DBG |   <name>mk-ha-576225</name>
	I0308 03:08:40.382690  927850 main.go:141] libmachine: (ha-576225) DBG |   <dns enable='no'/>
	I0308 03:08:40.382725  927850 main.go:141] libmachine: (ha-576225) DBG |   
	I0308 03:08:40.382751  927850 main.go:141] libmachine: (ha-576225) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 03:08:40.382824  927850 main.go:141] libmachine: (ha-576225) DBG |     <dhcp>
	I0308 03:08:40.382861  927850 main.go:141] libmachine: (ha-576225) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 03:08:40.382874  927850 main.go:141] libmachine: (ha-576225) DBG |     </dhcp>
	I0308 03:08:40.382885  927850 main.go:141] libmachine: (ha-576225) DBG |   </ip>
	I0308 03:08:40.382893  927850 main.go:141] libmachine: (ha-576225) DBG |   
	I0308 03:08:40.382900  927850 main.go:141] libmachine: (ha-576225) DBG | </network>
	I0308 03:08:40.382910  927850 main.go:141] libmachine: (ha-576225) DBG | 
	I0308 03:08:40.387482  927850 main.go:141] libmachine: (ha-576225) DBG | trying to create private KVM network mk-ha-576225 192.168.39.0/24...
	I0308 03:08:40.454041  927850 main.go:141] libmachine: (ha-576225) DBG | private KVM network mk-ha-576225 192.168.39.0/24 created
	I0308 03:08:40.454076  927850 main.go:141] libmachine: (ha-576225) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 ...
	I0308 03:08:40.454085  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.453973  927873 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.454100  927850 main.go:141] libmachine: (ha-576225) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:08:40.454239  927850 main.go:141] libmachine: (ha-576225) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:08:40.700284  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.700163  927873 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa...
	I0308 03:08:40.928145  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.928009  927873 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/ha-576225.rawdisk...
	I0308 03:08:40.928189  927850 main.go:141] libmachine: (ha-576225) DBG | Writing magic tar header
	I0308 03:08:40.928202  927850 main.go:141] libmachine: (ha-576225) DBG | Writing SSH key tar header
	I0308 03:08:40.928210  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:40.928128  927873 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 ...
	I0308 03:08:40.928225  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225
	I0308 03:08:40.928337  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:08:40.928367  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225 (perms=drwx------)
	I0308 03:08:40.928379  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:40.928392  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:08:40.928401  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:08:40.928412  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:08:40.928420  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:08:40.928427  927850 main.go:141] libmachine: (ha-576225) DBG | Checking permissions on dir: /home
	I0308 03:08:40.928431  927850 main.go:141] libmachine: (ha-576225) DBG | Skipping /home - not owner
	I0308 03:08:40.928444  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:08:40.928454  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:08:40.928480  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:08:40.928496  927850 main.go:141] libmachine: (ha-576225) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:08:40.928504  927850 main.go:141] libmachine: (ha-576225) Creating domain...
	I0308 03:08:40.929548  927850 main.go:141] libmachine: (ha-576225) define libvirt domain using xml: 
	I0308 03:08:40.929574  927850 main.go:141] libmachine: (ha-576225) <domain type='kvm'>
	I0308 03:08:40.929582  927850 main.go:141] libmachine: (ha-576225)   <name>ha-576225</name>
	I0308 03:08:40.929587  927850 main.go:141] libmachine: (ha-576225)   <memory unit='MiB'>2200</memory>
	I0308 03:08:40.929591  927850 main.go:141] libmachine: (ha-576225)   <vcpu>2</vcpu>
	I0308 03:08:40.929596  927850 main.go:141] libmachine: (ha-576225)   <features>
	I0308 03:08:40.929601  927850 main.go:141] libmachine: (ha-576225)     <acpi/>
	I0308 03:08:40.929604  927850 main.go:141] libmachine: (ha-576225)     <apic/>
	I0308 03:08:40.929611  927850 main.go:141] libmachine: (ha-576225)     <pae/>
	I0308 03:08:40.929631  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.929643  927850 main.go:141] libmachine: (ha-576225)   </features>
	I0308 03:08:40.929653  927850 main.go:141] libmachine: (ha-576225)   <cpu mode='host-passthrough'>
	I0308 03:08:40.929660  927850 main.go:141] libmachine: (ha-576225)   
	I0308 03:08:40.929668  927850 main.go:141] libmachine: (ha-576225)   </cpu>
	I0308 03:08:40.929672  927850 main.go:141] libmachine: (ha-576225)   <os>
	I0308 03:08:40.929677  927850 main.go:141] libmachine: (ha-576225)     <type>hvm</type>
	I0308 03:08:40.929693  927850 main.go:141] libmachine: (ha-576225)     <boot dev='cdrom'/>
	I0308 03:08:40.929702  927850 main.go:141] libmachine: (ha-576225)     <boot dev='hd'/>
	I0308 03:08:40.929706  927850 main.go:141] libmachine: (ha-576225)     <bootmenu enable='no'/>
	I0308 03:08:40.929710  927850 main.go:141] libmachine: (ha-576225)   </os>
	I0308 03:08:40.929714  927850 main.go:141] libmachine: (ha-576225)   <devices>
	I0308 03:08:40.929741  927850 main.go:141] libmachine: (ha-576225)     <disk type='file' device='cdrom'>
	I0308 03:08:40.929761  927850 main.go:141] libmachine: (ha-576225)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/boot2docker.iso'/>
	I0308 03:08:40.929768  927850 main.go:141] libmachine: (ha-576225)       <target dev='hdc' bus='scsi'/>
	I0308 03:08:40.929775  927850 main.go:141] libmachine: (ha-576225)       <readonly/>
	I0308 03:08:40.929780  927850 main.go:141] libmachine: (ha-576225)     </disk>
	I0308 03:08:40.929792  927850 main.go:141] libmachine: (ha-576225)     <disk type='file' device='disk'>
	I0308 03:08:40.929826  927850 main.go:141] libmachine: (ha-576225)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:08:40.929845  927850 main.go:141] libmachine: (ha-576225)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/ha-576225.rawdisk'/>
	I0308 03:08:40.929857  927850 main.go:141] libmachine: (ha-576225)       <target dev='hda' bus='virtio'/>
	I0308 03:08:40.929871  927850 main.go:141] libmachine: (ha-576225)     </disk>
	I0308 03:08:40.929881  927850 main.go:141] libmachine: (ha-576225)     <interface type='network'>
	I0308 03:08:40.929889  927850 main.go:141] libmachine: (ha-576225)       <source network='mk-ha-576225'/>
	I0308 03:08:40.929901  927850 main.go:141] libmachine: (ha-576225)       <model type='virtio'/>
	I0308 03:08:40.929913  927850 main.go:141] libmachine: (ha-576225)     </interface>
	I0308 03:08:40.929925  927850 main.go:141] libmachine: (ha-576225)     <interface type='network'>
	I0308 03:08:40.929936  927850 main.go:141] libmachine: (ha-576225)       <source network='default'/>
	I0308 03:08:40.929944  927850 main.go:141] libmachine: (ha-576225)       <model type='virtio'/>
	I0308 03:08:40.929954  927850 main.go:141] libmachine: (ha-576225)     </interface>
	I0308 03:08:40.929962  927850 main.go:141] libmachine: (ha-576225)     <serial type='pty'>
	I0308 03:08:40.929970  927850 main.go:141] libmachine: (ha-576225)       <target port='0'/>
	I0308 03:08:40.929976  927850 main.go:141] libmachine: (ha-576225)     </serial>
	I0308 03:08:40.929990  927850 main.go:141] libmachine: (ha-576225)     <console type='pty'>
	I0308 03:08:40.930003  927850 main.go:141] libmachine: (ha-576225)       <target type='serial' port='0'/>
	I0308 03:08:40.930011  927850 main.go:141] libmachine: (ha-576225)     </console>
	I0308 03:08:40.930022  927850 main.go:141] libmachine: (ha-576225)     <rng model='virtio'>
	I0308 03:08:40.930036  927850 main.go:141] libmachine: (ha-576225)       <backend model='random'>/dev/random</backend>
	I0308 03:08:40.930047  927850 main.go:141] libmachine: (ha-576225)     </rng>
	I0308 03:08:40.930061  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.930071  927850 main.go:141] libmachine: (ha-576225)     
	I0308 03:08:40.930075  927850 main.go:141] libmachine: (ha-576225)   </devices>
	I0308 03:08:40.930081  927850 main.go:141] libmachine: (ha-576225) </domain>
	I0308 03:08:40.930090  927850 main.go:141] libmachine: (ha-576225) 
	I0308 03:08:40.934388  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:7f:e5:ac in network default
	I0308 03:08:40.934976  927850 main.go:141] libmachine: (ha-576225) Ensuring networks are active...
	I0308 03:08:40.934993  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:40.935640  927850 main.go:141] libmachine: (ha-576225) Ensuring network default is active
	I0308 03:08:40.935909  927850 main.go:141] libmachine: (ha-576225) Ensuring network mk-ha-576225 is active
	I0308 03:08:40.936425  927850 main.go:141] libmachine: (ha-576225) Getting domain xml...
	I0308 03:08:40.937159  927850 main.go:141] libmachine: (ha-576225) Creating domain...
	I0308 03:08:42.113366  927850 main.go:141] libmachine: (ha-576225) Waiting to get IP...
	I0308 03:08:42.114368  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.114679  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.114763  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.114666  927873 retry.go:31] will retry after 273.842922ms: waiting for machine to come up
	I0308 03:08:42.390230  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.390677  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.390714  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.390617  927873 retry.go:31] will retry after 316.670928ms: waiting for machine to come up
	I0308 03:08:42.709075  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:42.709424  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:42.709448  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:42.709379  927873 retry.go:31] will retry after 360.008598ms: waiting for machine to come up
	I0308 03:08:43.070902  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:43.071307  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:43.071332  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:43.071253  927873 retry.go:31] will retry after 431.037924ms: waiting for machine to come up
	I0308 03:08:43.503994  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:43.504574  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:43.504607  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:43.504519  927873 retry.go:31] will retry after 566.141074ms: waiting for machine to come up
	I0308 03:08:44.072116  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:44.072547  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:44.072581  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:44.072470  927873 retry.go:31] will retry after 662.467797ms: waiting for machine to come up
	I0308 03:08:44.736295  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:44.736750  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:44.736807  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:44.736685  927873 retry.go:31] will retry after 1.071646339s: waiting for machine to come up
	I0308 03:08:45.809584  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:45.810090  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:45.810128  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:45.810010  927873 retry.go:31] will retry after 996.004199ms: waiting for machine to come up
	I0308 03:08:46.807198  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:46.807630  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:46.807657  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:46.807574  927873 retry.go:31] will retry after 1.343148181s: waiting for machine to come up
	I0308 03:08:48.153244  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:48.153633  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:48.153682  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:48.153592  927873 retry.go:31] will retry after 1.632548305s: waiting for machine to come up
	I0308 03:08:49.788450  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:49.788776  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:49.788811  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:49.788717  927873 retry.go:31] will retry after 2.584580251s: waiting for machine to come up
	I0308 03:08:52.376260  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:52.376718  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:52.376749  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:52.376669  927873 retry.go:31] will retry after 3.267198369s: waiting for machine to come up
	I0308 03:08:55.645730  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:08:55.646110  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:08:55.646135  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:08:55.646065  927873 retry.go:31] will retry after 4.457669923s: waiting for machine to come up
	I0308 03:09:00.108584  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:00.108992  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find current IP address of domain ha-576225 in network mk-ha-576225
	I0308 03:09:00.109043  927850 main.go:141] libmachine: (ha-576225) DBG | I0308 03:09:00.108951  927873 retry.go:31] will retry after 5.593586188s: waiting for machine to come up
	I0308 03:09:05.704430  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.704928  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has current primary IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.704954  927850 main.go:141] libmachine: (ha-576225) Found IP for machine: 192.168.39.251
	I0308 03:09:05.704965  927850 main.go:141] libmachine: (ha-576225) Reserving static IP address...
	I0308 03:09:05.705313  927850 main.go:141] libmachine: (ha-576225) DBG | unable to find host DHCP lease matching {name: "ha-576225", mac: "52:54:00:53:24:e8", ip: "192.168.39.251"} in network mk-ha-576225
	I0308 03:09:05.778257  927850 main.go:141] libmachine: (ha-576225) DBG | Getting to WaitForSSH function...
	I0308 03:09:05.778289  927850 main.go:141] libmachine: (ha-576225) Reserved static IP address: 192.168.39.251
	I0308 03:09:05.778303  927850 main.go:141] libmachine: (ha-576225) Waiting for SSH to be available...
	I0308 03:09:05.781259  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.781680  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:05.781715  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.781903  927850 main.go:141] libmachine: (ha-576225) DBG | Using SSH client type: external
	I0308 03:09:05.781925  927850 main.go:141] libmachine: (ha-576225) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa (-rw-------)
	I0308 03:09:05.781965  927850 main.go:141] libmachine: (ha-576225) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:09:05.781978  927850 main.go:141] libmachine: (ha-576225) DBG | About to run SSH command:
	I0308 03:09:05.781994  927850 main.go:141] libmachine: (ha-576225) DBG | exit 0
	I0308 03:09:05.913476  927850 main.go:141] libmachine: (ha-576225) DBG | SSH cmd err, output: <nil>: 
	I0308 03:09:05.913820  927850 main.go:141] libmachine: (ha-576225) KVM machine creation complete!
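
The block above shows the driver's IP wait: retry.go keeps polling the mk-ha-576225 network, backing off from roughly 0.3s up to about 5.6s between attempts, until a DHCP lease appears for MAC 52:54:00:53:24:e8. A minimal shell equivalent of that loop (a fixed sleep instead of the driver's growing backoff) might be:

    # Poll the libvirt network until the guest's MAC shows up in the DHCP leases.
    MAC=52:54:00:53:24:e8
    until virsh net-dhcp-leases mk-ha-576225 | grep -qi "$MAC"; do
      sleep 2
    done
    # Print the lease (IP 192.168.39.251 in this run).
    virsh net-dhcp-leases mk-ha-576225 | grep -i "$MAC"
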
	I0308 03:09:05.914184  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:09:05.914781  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:05.915015  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:05.915182  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:09:05.915198  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:05.916542  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:09:05.916558  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:09:05.916565  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:09:05.916570  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:05.918725  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.919080  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:05.919108  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:05.919339  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:05.919509  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:05.919656  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:05.919803  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:05.919972  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:05.920202  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:05.920223  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:09:06.032577  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:09:06.032609  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:09:06.032617  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.035477  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.035904  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.035932  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.036059  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.036262  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.036427  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.036610  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.036778  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.036941  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.036951  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:09:06.150337  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:09:06.150450  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:09:06.150462  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:09:06.150470  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.150745  927850 buildroot.go:166] provisioning hostname "ha-576225"
	I0308 03:09:06.150783  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.151063  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.153980  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.154342  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.154373  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.154531  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.154718  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.154852  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.155037  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.155156  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.155350  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.155365  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225 && echo "ha-576225" | sudo tee /etc/hostname
	I0308 03:09:06.287120  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:09:06.287159  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.289949  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.290422  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.290452  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.290700  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.290921  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.291146  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.291325  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.291531  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.291725  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.291742  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:09:06.418818  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:09:06.418849  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:09:06.418870  927850 buildroot.go:174] setting up certificates
	I0308 03:09:06.418881  927850 provision.go:84] configureAuth start
	I0308 03:09:06.418890  927850 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:09:06.419232  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:06.422154  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.422513  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.422545  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.422700  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.424976  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.425269  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.425315  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.425534  927850 provision.go:143] copyHostCerts
	I0308 03:09:06.425569  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:09:06.425605  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:09:06.425617  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:09:06.425699  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:09:06.425812  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:09:06.425838  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:09:06.425848  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:09:06.425888  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:09:06.425965  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:09:06.425991  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:09:06.425997  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:09:06.426040  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:09:06.426124  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225 san=[127.0.0.1 192.168.39.251 ha-576225 localhost minikube]
	I0308 03:09:06.563215  927850 provision.go:177] copyRemoteCerts
	I0308 03:09:06.563277  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:09:06.563304  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.566083  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.566378  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.566417  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.566590  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.566787  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.566933  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.567064  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:06.657118  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:09:06.657192  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:09:06.683087  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:09:06.683142  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0308 03:09:06.711091  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:09:06.711162  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:09:06.738785  927850 provision.go:87] duration metric: took 319.889667ms to configureAuth
	I0308 03:09:06.738817  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:09:06.739048  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:06.739173  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:06.742419  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.742814  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:06.742840  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:06.743024  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:06.743222  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.743417  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:06.743594  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:06.743792  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:06.743974  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:06.743991  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:09:07.026099  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:09:07.026130  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:09:07.026141  927850 main.go:141] libmachine: (ha-576225) Calling .GetURL
	I0308 03:09:07.027584  927850 main.go:141] libmachine: (ha-576225) DBG | Using libvirt version 6000000
	I0308 03:09:07.029783  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.030120  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.030159  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.030282  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:09:07.030296  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:09:07.030304  927850 client.go:171] duration metric: took 26.650612846s to LocalClient.Create
	I0308 03:09:07.030326  927850 start.go:167] duration metric: took 26.650676556s to libmachine.API.Create "ha-576225"
	I0308 03:09:07.030337  927850 start.go:293] postStartSetup for "ha-576225" (driver="kvm2")
	I0308 03:09:07.030354  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:09:07.030378  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.030600  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:09:07.030631  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.032764  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.033037  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.033078  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.033184  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.033360  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.033518  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.033688  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.119876  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:09:07.124587  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:09:07.124611  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:09:07.124675  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:09:07.124763  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:09:07.124776  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:09:07.124895  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:09:07.134758  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:09:07.160668  927850 start.go:296] duration metric: took 130.315738ms for postStartSetup
	I0308 03:09:07.160722  927850 main.go:141] libmachine: (ha-576225) Calling .GetConfigRaw
	I0308 03:09:07.161344  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:07.163693  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.164044  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.164065  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.164324  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:07.164531  927850 start.go:128] duration metric: took 26.801774502s to createHost
	I0308 03:09:07.164555  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.167892  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.168313  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.168335  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.168518  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.168730  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.168897  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.169056  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.169236  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:09:07.169442  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:09:07.169466  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:09:07.286593  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867347.261695886
	
	I0308 03:09:07.286625  927850 fix.go:216] guest clock: 1709867347.261695886
	I0308 03:09:07.286633  927850 fix.go:229] Guest: 2024-03-08 03:09:07.261695886 +0000 UTC Remote: 2024-03-08 03:09:07.164543538 +0000 UTC m=+26.917482463 (delta=97.152348ms)
	I0308 03:09:07.286669  927850 fix.go:200] guest clock delta is within tolerance: 97.152348ms
	I0308 03:09:07.286675  927850 start.go:83] releasing machines lock for "ha-576225", held for 26.923998397s
	I0308 03:09:07.286704  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.287018  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:07.289734  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.290099  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.290123  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.290326  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.290885  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.291082  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:07.291163  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:09:07.291225  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.291363  927850 ssh_runner.go:195] Run: cat /version.json
	I0308 03:09:07.291393  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:07.294052  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294114  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294424  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.294449  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294475  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:07.294523  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:07.294623  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.294697  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:07.294798  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.294861  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:07.294935  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.294995  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:07.295059  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.295112  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:07.404478  927850 ssh_runner.go:195] Run: systemctl --version
	I0308 03:09:07.411098  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:09:07.575044  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:09:07.582025  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:09:07.582104  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:09:07.599648  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:09:07.599689  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:09:07.599763  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:09:07.623078  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:09:07.637158  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:09:07.637218  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:09:07.652360  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:09:07.666105  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:09:07.777782  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:09:07.914119  927850 docker.go:233] disabling docker service ...
	I0308 03:09:07.914214  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:09:07.930726  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:09:07.944752  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:09:08.080642  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:09:08.218262  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:09:08.233133  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:09:08.253229  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:09:08.253315  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.265163  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:09:08.265224  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.277025  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.288671  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:09:08.300359  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:09:08.312337  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:09:08.322998  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:09:08.323039  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:09:08.337192  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:09:08.347570  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:09:08.486444  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
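
The sed commands above point the CRI-O drop-in at the registry.k8s.io/pause:3.9 pause image and switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, after which crio is restarted. A reconstructed fragment of /etc/crio/crio.conf.d/02-crio.conf after those edits (illustrative only; the real drop-in carries additional keys) would look like:

    # Print the reconstructed fragment; section names follow CRI-O's standard layout.
    cat <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    EOF
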
	I0308 03:09:08.623050  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:09:08.623156  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:09:08.628275  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:09:08.628333  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:09:08.632624  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:09:08.684740  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:09:08.684833  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:09:08.718558  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:09:08.749449  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:09:08.750921  927850 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:09:08.753779  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:08.754143  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:08.754169  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:08.754452  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:09:08.758783  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:09:08.772800  927850 kubeadm.go:877] updating cluster {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:09:08.772943  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:09:08.773010  927850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:09:08.805268  927850 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 03:09:08.805415  927850 ssh_runner.go:195] Run: which lz4
	I0308 03:09:08.809582  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0308 03:09:08.809663  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 03:09:08.814188  927850 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 03:09:08.814214  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 03:09:10.606710  927850 crio.go:444] duration metric: took 1.797037668s to copy over tarball
	I0308 03:09:10.606818  927850 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 03:09:13.297404  927850 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690545035s)
	I0308 03:09:13.297442  927850 crio.go:451] duration metric: took 2.690686272s to extract the tarball
	I0308 03:09:13.297450  927850 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 03:09:13.340681  927850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:09:13.392353  927850 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:09:13.392382  927850 cache_images.go:84] Images are preloaded, skipping loading
	I0308 03:09:13.392391  927850 kubeadm.go:928] updating node { 192.168.39.251 8443 v1.28.4 crio true true} ...
	I0308 03:09:13.392510  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:09:13.392584  927850 ssh_runner.go:195] Run: crio config
	I0308 03:09:13.449179  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:09:13.449203  927850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 03:09:13.449217  927850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:09:13.449245  927850 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-576225 NodeName:ha-576225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:09:13.449418  927850 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-576225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
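
The kubeadm configuration printed above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp step further down). As a hedged sanity check, assuming kubeadm v1.28.4 is present in the binaries directory the kubelet unit uses, the rendered file can be exercised without touching node state:

    # Dry-run the generated configuration; nothing is applied to the node.
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
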
	
	I0308 03:09:13.449448  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:09:13.449514  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
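
The static pod manifest above runs kube-vip v0.7.1 with leader election, advertising the control-plane VIP 192.168.39.254 on eth0 and fronting port 8443. A quick, non-authoritative check of the VIP once the control plane is up could be:

    # The VIP should appear as a secondary address on eth0 after kube-vip wins election.
    ip addr show dev eth0 | grep 192.168.39.254
    # /healthz is readable anonymously by default, so the VIP-fronted apiserver should answer.
    curl -k https://192.168.39.254:8443/healthz
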
	I0308 03:09:13.449565  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:09:13.460945  927850 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:09:13.461004  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0308 03:09:13.472431  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0308 03:09:13.491557  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:09:13.509663  927850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 03:09:13.527724  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:09:13.546318  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:09:13.550750  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:09:13.564788  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:09:13.699617  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:09:13.717939  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.251
	I0308 03:09:13.717972  927850 certs.go:194] generating shared ca certs ...
	I0308 03:09:13.717994  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.718219  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:09:13.718292  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:09:13.718305  927850 certs.go:256] generating profile certs ...
	I0308 03:09:13.718379  927850 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:09:13.718395  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt with IP's: []
	I0308 03:09:13.849139  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt ...
	I0308 03:09:13.849182  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt: {Name:mk32536b65761539df07da1a79a6b1b5b790cbd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.849411  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key ...
	I0308 03:09:13.849433  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key: {Name:mk3231ee4f1f222e55be930cee3f99c59eaa3a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:13.849565  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201
	I0308 03:09:13.849583  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.254]
	I0308 03:09:14.060754  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 ...
	I0308 03:09:14.060785  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201: {Name:mk0e299082370d42c4949bed72be11ba90c5e095 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.060937  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201 ...
	I0308 03:09:14.060951  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201: {Name:mka44f34e7228ac2eee6a53ccb590b8ee666530d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.061019  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.4a289201 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:09:14.061123  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.4a289201 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:09:14.061190  927850 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:09:14.061205  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt with IP's: []
	I0308 03:09:14.216138  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt ...
	I0308 03:09:14.216175  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt: {Name:mk14538e3305db9cae733a63ff4ec9b8eb2791bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.216337  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key ...
	I0308 03:09:14.216348  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key: {Name:mk28be81ffe2f6fa87b5f077620b9fe69a4c031e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:14.216415  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:09:14.216432  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:09:14.216445  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:09:14.216459  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:09:14.216477  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:09:14.216490  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:09:14.216503  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:09:14.216515  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:09:14.216565  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:09:14.216614  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:09:14.216631  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:09:14.216659  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:09:14.216681  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:09:14.216701  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:09:14.216736  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:09:14.216766  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.216779  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.216791  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.217491  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:09:14.245691  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:09:14.273248  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:09:14.298581  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:09:14.326607  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 03:09:14.354862  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:09:14.380624  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:09:14.406394  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:09:14.432387  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:09:14.458966  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:09:14.495459  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:09:14.533510  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:09:14.551518  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:09:14.558017  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:09:14.570545  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.575977  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.576029  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:09:14.584707  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:09:14.599663  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:09:14.613198  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.618600  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.618665  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:09:14.625256  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:09:14.638153  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:09:14.650876  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.655776  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.655830  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:09:14.661980  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
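The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: each comes from the openssl x509 -hash -noout call run just before the corresponding ln, so a link can be verified by recomputing the hash, e.g.:

    # Prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created for minikubeCA.pem above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem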
	I0308 03:09:14.674420  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:09:14.679001  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:09:14.679067  927850 kubeadm.go:391] StartCluster: {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:09:14.679181  927850 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:09:14.679258  927850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:09:14.722289  927850 cri.go:89] found id: ""
	I0308 03:09:14.722393  927850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 03:09:14.734302  927850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 03:09:14.745847  927850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 03:09:14.757140  927850 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 03:09:14.757155  927850 kubeadm.go:156] found existing configuration files:
	
	I0308 03:09:14.757196  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 03:09:14.768257  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 03:09:14.768321  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 03:09:14.779640  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 03:09:14.790416  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 03:09:14.790480  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 03:09:14.801216  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 03:09:14.811327  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 03:09:14.811378  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 03:09:14.822063  927850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 03:09:14.834338  927850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 03:09:14.834392  927850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 03:09:14.846562  927850 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 03:09:15.095206  927850 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 03:09:28.955594  927850 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 03:09:28.955648  927850 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 03:09:28.955761  927850 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 03:09:28.955923  927850 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 03:09:28.956098  927850 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 03:09:28.956183  927850 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 03:09:28.957604  927850 out.go:204]   - Generating certificates and keys ...
	I0308 03:09:28.957708  927850 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 03:09:28.957821  927850 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 03:09:28.957939  927850 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 03:09:28.958041  927850 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 03:09:28.958167  927850 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 03:09:28.958269  927850 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 03:09:28.958375  927850 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 03:09:28.958483  927850 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-576225 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0308 03:09:28.958536  927850 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 03:09:28.958677  927850 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-576225 localhost] and IPs [192.168.39.251 127.0.0.1 ::1]
	I0308 03:09:28.958746  927850 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 03:09:28.958810  927850 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 03:09:28.958865  927850 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 03:09:28.958957  927850 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 03:09:28.959020  927850 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 03:09:28.959068  927850 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 03:09:28.959163  927850 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 03:09:28.959249  927850 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 03:09:28.959353  927850 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 03:09:28.959443  927850 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 03:09:28.961878  927850 out.go:204]   - Booting up control plane ...
	I0308 03:09:28.961998  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 03:09:28.962109  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 03:09:28.962198  927850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 03:09:28.962341  927850 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 03:09:28.962454  927850 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 03:09:28.962508  927850 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 03:09:28.962714  927850 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 03:09:28.962846  927850 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.601132 seconds
	I0308 03:09:28.962996  927850 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 03:09:28.963183  927850 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 03:09:28.963274  927850 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 03:09:28.963497  927850 kubeadm.go:309] [mark-control-plane] Marking the node ha-576225 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 03:09:28.963568  927850 kubeadm.go:309] [bootstrap-token] Using token: ewomow.x8ox8qe7q1ouzoq2
	I0308 03:09:28.964909  927850 out.go:204]   - Configuring RBAC rules ...
	I0308 03:09:28.965016  927850 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 03:09:28.965115  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 03:09:28.965270  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 03:09:28.965482  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 03:09:28.965642  927850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 03:09:28.965769  927850 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 03:09:28.965929  927850 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 03:09:28.965998  927850 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 03:09:28.966059  927850 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 03:09:28.966069  927850 kubeadm.go:309] 
	I0308 03:09:28.966158  927850 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 03:09:28.966171  927850 kubeadm.go:309] 
	I0308 03:09:28.966288  927850 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 03:09:28.966297  927850 kubeadm.go:309] 
	I0308 03:09:28.966331  927850 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 03:09:28.966408  927850 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 03:09:28.966484  927850 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 03:09:28.966495  927850 kubeadm.go:309] 
	I0308 03:09:28.966577  927850 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 03:09:28.966588  927850 kubeadm.go:309] 
	I0308 03:09:28.966679  927850 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 03:09:28.966689  927850 kubeadm.go:309] 
	I0308 03:09:28.966762  927850 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 03:09:28.966878  927850 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 03:09:28.966981  927850 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 03:09:28.966990  927850 kubeadm.go:309] 
	I0308 03:09:28.967104  927850 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 03:09:28.967217  927850 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 03:09:28.967228  927850 kubeadm.go:309] 
	I0308 03:09:28.967339  927850 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ewomow.x8ox8qe7q1ouzoq2 \
	I0308 03:09:28.967486  927850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 03:09:28.967522  927850 kubeadm.go:309] 	--control-plane 
	I0308 03:09:28.967532  927850 kubeadm.go:309] 
	I0308 03:09:28.967657  927850 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 03:09:28.967670  927850 kubeadm.go:309] 
	I0308 03:09:28.967780  927850 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ewomow.x8ox8qe7q1ouzoq2 \
	I0308 03:09:28.967927  927850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
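The join commands printed above carry a bootstrap token plus the SHA-256 hash of the cluster CA's public key. The hash side can be re-derived from the CA that was copied to /var/lib/minikube/certs earlier in this log, using the standard kubeadm recipe (a sketch; not something the test runs):

    # Recompute the discovery-token-ca-cert-hash; it should match the sha256:93ce... value above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex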
	I0308 03:09:28.967943  927850 cni.go:84] Creating CNI manager for ""
	I0308 03:09:28.967954  927850 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 03:09:28.969403  927850 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 03:09:28.970646  927850 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 03:09:29.010907  927850 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 03:09:29.010936  927850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 03:09:29.071668  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 03:09:30.064811  927850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 03:09:30.064892  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:30.065003  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225 minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=true
	I0308 03:09:30.091925  927850 ops.go:34] apiserver oom_adj: -16
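The two kubectl invocations just above grant the kube-system default service account cluster-admin (the minikube-rbac binding) and stamp the primary node with minikube.k8s.io/* labels; the oom_adj of -16 read for kube-apiserver simply means the OOM killer will strongly prefer other processes over the API server. Both results can be inspected afterwards with the same in-VM kubectl (a sketch, reusing the binary and kubeconfig paths from the log):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-576225 --show-labels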
	I0308 03:09:30.228715  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:30.728864  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:31.229363  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:31.729021  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:32.229171  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:32.728880  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:33.228830  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:33.728877  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:34.228892  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:34.729404  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:35.229030  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:35.729365  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:36.229126  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:36.728881  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:37.229163  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:37.729163  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:38.229518  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 03:09:38.337792  927850 kubeadm.go:1106] duration metric: took 8.272957989s to wait for elevateKubeSystemPrivileges
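The burst of identical "kubectl get sa default" calls above is a wait loop: minikube polls until the default ServiceAccount exists, which only happens once the controller-manager is up and serving, hence the 8.27s reported for elevateKubeSystemPrivileges. The same probe can be run by hand (a sketch, using the same binary and kubeconfig as the loop):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default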
	W0308 03:09:38.337837  927850 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 03:09:38.337848  927850 kubeadm.go:393] duration metric: took 23.658793696s to StartCluster
	I0308 03:09:38.337884  927850 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:38.337996  927850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:09:38.338950  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:09:38.339160  927850 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:09:38.339182  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:09:38.339184  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 03:09:38.339198  927850 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 03:09:38.339241  927850 addons.go:69] Setting storage-provisioner=true in profile "ha-576225"
	I0308 03:09:38.339266  927850 addons.go:234] Setting addon storage-provisioner=true in "ha-576225"
	I0308 03:09:38.339287  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:09:38.339268  927850 addons.go:69] Setting default-storageclass=true in profile "ha-576225"
	I0308 03:09:38.339349  927850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-576225"
	I0308 03:09:38.339499  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:38.339719  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.339747  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.339758  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.339779  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.355154  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
	I0308 03:09:38.355620  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.356205  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.356254  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.356607  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.356849  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.359067  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:09:38.359428  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 03:09:38.359628  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0308 03:09:38.360013  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.360077  927850 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 03:09:38.360343  927850 addons.go:234] Setting addon default-storageclass=true in "ha-576225"
	I0308 03:09:38.360390  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:09:38.360482  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.360504  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.360774  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.360822  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.360883  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.361565  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.361616  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.375778  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0308 03:09:38.375972  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0308 03:09:38.376248  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.376399  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.376878  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.376896  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.376900  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.376915  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.377258  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.377328  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.377468  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.377880  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:38.377943  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:38.379267  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:38.381063  927850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:09:38.382412  927850 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:09:38.382435  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 03:09:38.382454  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:38.385750  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.386243  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:38.386275  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.386390  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:38.386563  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:38.386753  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:38.386903  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:38.394422  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45939
	I0308 03:09:38.394804  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:38.395314  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:38.395346  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:38.395639  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:38.395797  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:09:38.397239  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:09:38.397469  927850 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 03:09:38.397486  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 03:09:38.397503  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:09:38.400081  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.400422  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:09:38.400442  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:09:38.400676  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:09:38.400860  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:09:38.401018  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:09:38.401171  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:09:38.567160  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 03:09:38.631601  927850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:09:38.633180  927850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 03:09:39.447931  927850 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
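The sed pipeline a few lines above rewrites the coredns ConfigMap so the Corefile gains a hosts block ahead of the forward plugin (plus a log directive); going by the sed expression, the injected stanza is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

which is what the "host record injected into CoreDNS's ConfigMap" line confirms.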
	I0308 03:09:39.726952  927850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.093728831s)
	I0308 03:09:39.727025  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727039  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727104  927850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.095460452s)
	I0308 03:09:39.727170  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727182  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727377  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727407  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727421  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727434  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727448  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727454  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727460  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727469  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.727475  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.727704  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727732  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727744  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.727759  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.727763  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727794  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.727900  927850 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0308 03:09:39.727907  927850 round_trippers.go:469] Request Headers:
	I0308 03:09:39.727915  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:09:39.727917  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:09:39.765174  927850 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0308 03:09:39.766610  927850 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0308 03:09:39.766630  927850 round_trippers.go:469] Request Headers:
	I0308 03:09:39.766638  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:09:39.766642  927850 round_trippers.go:473]     Content-Type: application/json
	I0308 03:09:39.766644  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:09:39.777780  927850 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0308 03:09:39.778064  927850 main.go:141] libmachine: Making call to close driver server
	I0308 03:09:39.778094  927850 main.go:141] libmachine: (ha-576225) Calling .Close
	I0308 03:09:39.778395  927850 main.go:141] libmachine: (ha-576225) DBG | Closing plugin on server side
	I0308 03:09:39.778470  927850 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:09:39.778495  927850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:09:39.780185  927850 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 03:09:39.781498  927850 addons.go:505] duration metric: took 1.442301659s for enable addons: enabled=[storage-provisioner default-storageclass]
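With storage-provisioner and default-storageclass enabled (the PUT to .../storageclasses/standard above is presumably the default-class annotation being applied), the result can be checked from the host with the profile's kubeconfig context, in the same style as the other commands in this report (a sketch; the storage-provisioner pod name is assumed from the addon's usual manifest, not taken from this log):

    kubectl --context ha-576225 get storageclass standard
    kubectl --context ha-576225 -n kube-system get pods | grep storage-provisioner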
	I0308 03:09:39.781543  927850 start.go:245] waiting for cluster config update ...
	I0308 03:09:39.781561  927850 start.go:254] writing updated cluster config ...
	I0308 03:09:39.783239  927850 out.go:177] 
	I0308 03:09:39.784625  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:09:39.784727  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:39.786321  927850 out.go:177] * Starting "ha-576225-m02" control-plane node in "ha-576225" cluster
	I0308 03:09:39.787575  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:09:39.787598  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:09:39.787690  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:09:39.787701  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:09:39.787764  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:09:39.787928  927850 start.go:360] acquireMachinesLock for ha-576225-m02: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:09:39.787970  927850 start.go:364] duration metric: took 23.713µs to acquireMachinesLock for "ha-576225-m02"
	I0308 03:09:39.787992  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:09:39.788057  927850 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0308 03:09:39.789998  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:09:39.790091  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:09:39.790127  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:09:39.806065  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0308 03:09:39.806664  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:09:39.807195  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:09:39.807228  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:09:39.807612  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:09:39.807835  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:09:39.807984  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:09:39.808166  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:09:39.808196  927850 client.go:168] LocalClient.Create starting
	I0308 03:09:39.808230  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:09:39.808279  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:09:39.808298  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:09:39.808373  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:09:39.808401  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:09:39.808417  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:09:39.808443  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:09:39.808455  927850 main.go:141] libmachine: (ha-576225-m02) Calling .PreCreateCheck
	I0308 03:09:39.808641  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:09:39.809039  927850 main.go:141] libmachine: Creating machine...
	I0308 03:09:39.809053  927850 main.go:141] libmachine: (ha-576225-m02) Calling .Create
	I0308 03:09:39.809191  927850 main.go:141] libmachine: (ha-576225-m02) Creating KVM machine...
	I0308 03:09:39.810615  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found existing default KVM network
	I0308 03:09:39.810719  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found existing private KVM network mk-ha-576225
	I0308 03:09:39.810857  927850 main.go:141] libmachine: (ha-576225-m02) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 ...
	I0308 03:09:39.810885  927850 main.go:141] libmachine: (ha-576225-m02) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:09:39.810964  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:39.810852  928212 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:09:39.811058  927850 main.go:141] libmachine: (ha-576225-m02) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:09:40.061605  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.061464  928212 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa...
	I0308 03:09:40.171537  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.171359  928212 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/ha-576225-m02.rawdisk...
	I0308 03:09:40.171597  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Writing magic tar header
	I0308 03:09:40.171615  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Writing SSH key tar header
	I0308 03:09:40.171639  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:40.171487  928212 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 ...
	I0308 03:09:40.171656  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02 (perms=drwx------)
	I0308 03:09:40.171676  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:09:40.171690  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02
	I0308 03:09:40.171712  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:09:40.171722  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:09:40.171769  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:09:40.171793  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:09:40.171805  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:09:40.171822  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:09:40.171836  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:09:40.171848  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:09:40.171881  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Checking permissions on dir: /home
	I0308 03:09:40.171897  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Skipping /home - not owner
	I0308 03:09:40.171916  927850 main.go:141] libmachine: (ha-576225-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:09:40.171927  927850 main.go:141] libmachine: (ha-576225-m02) Creating domain...
	I0308 03:09:40.172826  927850 main.go:141] libmachine: (ha-576225-m02) define libvirt domain using xml: 
	I0308 03:09:40.172849  927850 main.go:141] libmachine: (ha-576225-m02) <domain type='kvm'>
	I0308 03:09:40.172859  927850 main.go:141] libmachine: (ha-576225-m02)   <name>ha-576225-m02</name>
	I0308 03:09:40.172866  927850 main.go:141] libmachine: (ha-576225-m02)   <memory unit='MiB'>2200</memory>
	I0308 03:09:40.172875  927850 main.go:141] libmachine: (ha-576225-m02)   <vcpu>2</vcpu>
	I0308 03:09:40.172884  927850 main.go:141] libmachine: (ha-576225-m02)   <features>
	I0308 03:09:40.172889  927850 main.go:141] libmachine: (ha-576225-m02)     <acpi/>
	I0308 03:09:40.172894  927850 main.go:141] libmachine: (ha-576225-m02)     <apic/>
	I0308 03:09:40.172899  927850 main.go:141] libmachine: (ha-576225-m02)     <pae/>
	I0308 03:09:40.172905  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.172911  927850 main.go:141] libmachine: (ha-576225-m02)   </features>
	I0308 03:09:40.172918  927850 main.go:141] libmachine: (ha-576225-m02)   <cpu mode='host-passthrough'>
	I0308 03:09:40.172925  927850 main.go:141] libmachine: (ha-576225-m02)   
	I0308 03:09:40.172935  927850 main.go:141] libmachine: (ha-576225-m02)   </cpu>
	I0308 03:09:40.172959  927850 main.go:141] libmachine: (ha-576225-m02)   <os>
	I0308 03:09:40.172977  927850 main.go:141] libmachine: (ha-576225-m02)     <type>hvm</type>
	I0308 03:09:40.172983  927850 main.go:141] libmachine: (ha-576225-m02)     <boot dev='cdrom'/>
	I0308 03:09:40.172990  927850 main.go:141] libmachine: (ha-576225-m02)     <boot dev='hd'/>
	I0308 03:09:40.173025  927850 main.go:141] libmachine: (ha-576225-m02)     <bootmenu enable='no'/>
	I0308 03:09:40.173047  927850 main.go:141] libmachine: (ha-576225-m02)   </os>
	I0308 03:09:40.173058  927850 main.go:141] libmachine: (ha-576225-m02)   <devices>
	I0308 03:09:40.173072  927850 main.go:141] libmachine: (ha-576225-m02)     <disk type='file' device='cdrom'>
	I0308 03:09:40.173092  927850 main.go:141] libmachine: (ha-576225-m02)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/boot2docker.iso'/>
	I0308 03:09:40.173125  927850 main.go:141] libmachine: (ha-576225-m02)       <target dev='hdc' bus='scsi'/>
	I0308 03:09:40.173139  927850 main.go:141] libmachine: (ha-576225-m02)       <readonly/>
	I0308 03:09:40.173151  927850 main.go:141] libmachine: (ha-576225-m02)     </disk>
	I0308 03:09:40.173166  927850 main.go:141] libmachine: (ha-576225-m02)     <disk type='file' device='disk'>
	I0308 03:09:40.173207  927850 main.go:141] libmachine: (ha-576225-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:09:40.173226  927850 main.go:141] libmachine: (ha-576225-m02)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/ha-576225-m02.rawdisk'/>
	I0308 03:09:40.173239  927850 main.go:141] libmachine: (ha-576225-m02)       <target dev='hda' bus='virtio'/>
	I0308 03:09:40.173253  927850 main.go:141] libmachine: (ha-576225-m02)     </disk>
	I0308 03:09:40.173264  927850 main.go:141] libmachine: (ha-576225-m02)     <interface type='network'>
	I0308 03:09:40.173313  927850 main.go:141] libmachine: (ha-576225-m02)       <source network='mk-ha-576225'/>
	I0308 03:09:40.173339  927850 main.go:141] libmachine: (ha-576225-m02)       <model type='virtio'/>
	I0308 03:09:40.173351  927850 main.go:141] libmachine: (ha-576225-m02)     </interface>
	I0308 03:09:40.173370  927850 main.go:141] libmachine: (ha-576225-m02)     <interface type='network'>
	I0308 03:09:40.173408  927850 main.go:141] libmachine: (ha-576225-m02)       <source network='default'/>
	I0308 03:09:40.173432  927850 main.go:141] libmachine: (ha-576225-m02)       <model type='virtio'/>
	I0308 03:09:40.173455  927850 main.go:141] libmachine: (ha-576225-m02)     </interface>
	I0308 03:09:40.173475  927850 main.go:141] libmachine: (ha-576225-m02)     <serial type='pty'>
	I0308 03:09:40.173489  927850 main.go:141] libmachine: (ha-576225-m02)       <target port='0'/>
	I0308 03:09:40.173514  927850 main.go:141] libmachine: (ha-576225-m02)     </serial>
	I0308 03:09:40.173528  927850 main.go:141] libmachine: (ha-576225-m02)     <console type='pty'>
	I0308 03:09:40.173541  927850 main.go:141] libmachine: (ha-576225-m02)       <target type='serial' port='0'/>
	I0308 03:09:40.173554  927850 main.go:141] libmachine: (ha-576225-m02)     </console>
	I0308 03:09:40.173566  927850 main.go:141] libmachine: (ha-576225-m02)     <rng model='virtio'>
	I0308 03:09:40.173590  927850 main.go:141] libmachine: (ha-576225-m02)       <backend model='random'>/dev/random</backend>
	I0308 03:09:40.173603  927850 main.go:141] libmachine: (ha-576225-m02)     </rng>
	I0308 03:09:40.173614  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.173627  927850 main.go:141] libmachine: (ha-576225-m02)     
	I0308 03:09:40.173639  927850 main.go:141] libmachine: (ha-576225-m02)   </devices>
	I0308 03:09:40.173648  927850 main.go:141] libmachine: (ha-576225-m02) </domain>
	I0308 03:09:40.173659  927850 main.go:141] libmachine: (ha-576225-m02) 
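The block above is the libvirt domain XML the kvm2 driver defines for the m02 machine: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, a raw disk, and virtio NICs on the mk-ha-576225 and default networks. As a rough, trimmed sketch (not the driver's actual code), comparable XML can be rendered in Go with text/template; every value below is a placeholder copied from the log, and serial/console/rng devices are omitted for brevity:

    package main

    import (
        "os"
        "text/template"
    )

    // domainConfig holds the handful of values that vary per machine in the
    // XML shown in the log above.
    type domainConfig struct {
        Name      string
        MemoryMiB int
        VCPUs     int
        ISOPath   string
        DiskPath  string
        Network   string
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        // Values mirror the ha-576225-m02 machine from the log; adjust as needed.
        cfg := domainConfig{
            Name:      "ha-576225-m02",
            MemoryMiB: 2200,
            VCPUs:     2,
            ISOPath:   "/path/to/boot2docker.iso",
            DiskPath:  "/path/to/ha-576225-m02.rawdisk",
            Network:   "mk-ha-576225",
        }
        tmpl := template.Must(template.New("domain").Parse(domainXML))
        // The rendered XML is what would be handed to libvirt (virsh define or
        // the virDomainDefineXML API) at the "Creating domain..." step above.
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }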
	I0308 03:09:40.180658  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:25:bc:c5 in network default
	I0308 03:09:40.181358  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:40.181381  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring networks are active...
	I0308 03:09:40.182217  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring network default is active
	I0308 03:09:40.182609  927850 main.go:141] libmachine: (ha-576225-m02) Ensuring network mk-ha-576225 is active
	I0308 03:09:40.183053  927850 main.go:141] libmachine: (ha-576225-m02) Getting domain xml...
	I0308 03:09:40.183845  927850 main.go:141] libmachine: (ha-576225-m02) Creating domain...
	I0308 03:09:41.409071  927850 main.go:141] libmachine: (ha-576225-m02) Waiting to get IP...
	I0308 03:09:41.409950  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.410393  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.410425  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.410355  928212 retry.go:31] will retry after 236.493239ms: waiting for machine to come up
	I0308 03:09:41.648854  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.649310  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.649343  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.649236  928212 retry.go:31] will retry after 290.945002ms: waiting for machine to come up
	I0308 03:09:41.942049  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:41.942535  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:41.942574  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:41.942496  928212 retry.go:31] will retry after 446.637822ms: waiting for machine to come up
	I0308 03:09:42.391146  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:42.391602  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:42.391627  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:42.391553  928212 retry.go:31] will retry after 591.707727ms: waiting for machine to come up
	I0308 03:09:42.985370  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:42.985882  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:42.985918  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:42.985844  928212 retry.go:31] will retry after 572.398923ms: waiting for machine to come up
	I0308 03:09:43.559842  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:43.560465  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:43.560497  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:43.560418  928212 retry.go:31] will retry after 911.298328ms: waiting for machine to come up
	I0308 03:09:44.473019  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:44.473513  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:44.473546  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:44.473459  928212 retry.go:31] will retry after 1.130415745s: waiting for machine to come up
	I0308 03:09:45.605086  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:45.605606  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:45.605637  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:45.605561  928212 retry.go:31] will retry after 1.216381839s: waiting for machine to come up
	I0308 03:09:46.823962  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:46.824386  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:46.824428  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:46.824318  928212 retry.go:31] will retry after 1.299774618s: waiting for machine to come up
	I0308 03:09:48.125805  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:48.126236  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:48.126266  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:48.126175  928212 retry.go:31] will retry after 1.805876059s: waiting for machine to come up
	I0308 03:09:49.934160  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:49.934637  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:49.934669  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:49.934542  928212 retry.go:31] will retry after 2.221353292s: waiting for machine to come up
	I0308 03:09:52.158940  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:52.159290  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:52.159346  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:52.159227  928212 retry.go:31] will retry after 2.485920219s: waiting for machine to come up
	I0308 03:09:54.646384  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:54.646823  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:54.646852  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:54.646744  928212 retry.go:31] will retry after 3.903605035s: waiting for machine to come up
	I0308 03:09:58.556071  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:09:58.557077  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find current IP address of domain ha-576225-m02 in network mk-ha-576225
	I0308 03:09:58.557102  927850 main.go:141] libmachine: (ha-576225-m02) DBG | I0308 03:09:58.557039  928212 retry.go:31] will retry after 5.168694212s: waiting for machine to come up
	I0308 03:10:03.730530  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.731097  927850 main.go:141] libmachine: (ha-576225-m02) Found IP for machine: 192.168.39.128
	I0308 03:10:03.731124  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has current primary IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.731130  927850 main.go:141] libmachine: (ha-576225-m02) Reserving static IP address...
	I0308 03:10:03.731442  927850 main.go:141] libmachine: (ha-576225-m02) DBG | unable to find host DHCP lease matching {name: "ha-576225-m02", mac: "52:54:00:13:93:a0", ip: "192.168.39.128"} in network mk-ha-576225
	I0308 03:10:03.807303  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Getting to WaitForSSH function...
	I0308 03:10:03.807354  927850 main.go:141] libmachine: (ha-576225-m02) Reserved static IP address: 192.168.39.128
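The retry lines above, with delays growing from roughly 236ms to several seconds, are the driver polling the libvirt network's DHCP leases until the new domain's MAC address shows up with an IP. A minimal Go sketch of that wait-with-growing-backoff loop; lookupIP stands in for the real lease query and is purely hypothetical:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP is a stand-in for querying the libvirt network's DHCP leases
    // for the domain's MAC address; it is hypothetical, not minikube code.
    func lookupIP() (string, error) {
        return "", errNoIP
    }

    // waitForIP polls lookupIP with a jittered, growing delay until an address
    // appears or the deadline passes, mirroring the retry lines in the log.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            ip, err := lookupIP()
            if err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay between attempts
        }
        return "", fmt.Errorf("timed out waiting for IP")
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }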
	I0308 03:10:03.807369  927850 main.go:141] libmachine: (ha-576225-m02) Waiting for SSH to be available...
	I0308 03:10:03.810205  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.810645  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:03.810681  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.810866  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using SSH client type: external
	I0308 03:10:03.810897  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa (-rw-------)
	I0308 03:10:03.810924  927850 main.go:141] libmachine: (ha-576225-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:10:03.810954  927850 main.go:141] libmachine: (ha-576225-m02) DBG | About to run SSH command:
	I0308 03:10:03.810972  927850 main.go:141] libmachine: (ha-576225-m02) DBG | exit 0
	I0308 03:10:03.937399  927850 main.go:141] libmachine: (ha-576225-m02) DBG | SSH cmd err, output: <nil>: 
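The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs `exit 0`; an exit status of 0 is taken to mean sshd inside the guest is accepting connections with the machine's private key. A rough equivalent with os/exec, using the key path and address from the log as placeholder values (this is an illustration, not the libmachine implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest over ssh; a nil error (exit code 0)
    // means sshd is up and the key is accepted.
    func sshReady(keyPath, addr string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+addr,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        // Placeholder values taken from the log; adjust for a real machine.
        key := "/home/jenkins/.minikube/machines/ha-576225-m02/id_rsa"
        addr := "192.168.39.128"
        for i := 0; i < 5; i++ {
            if sshReady(key, addr) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }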
	I0308 03:10:03.937692  927850 main.go:141] libmachine: (ha-576225-m02) KVM machine creation complete!
	I0308 03:10:03.937985  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:10:03.938616  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:03.938909  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:03.939102  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:10:03.939127  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:10:03.940444  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:10:03.940459  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:10:03.940467  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:10:03.940475  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:03.942977  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.943381  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:03.943410  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:03.943544  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:03.943773  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:03.943971  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:03.944088  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:03.944227  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:03.944518  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:03.944533  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:10:04.056763  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:10:04.056804  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:10:04.056816  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.059539  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.060000  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.060026  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.060220  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.060431  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.060639  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.060833  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.060999  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.061198  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.061212  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:10:04.174334  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:10:04.174469  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:10:04.174485  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:10:04.174495  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.174802  927850 buildroot.go:166] provisioning hostname "ha-576225-m02"
	I0308 03:10:04.174839  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.175101  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.177796  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.178188  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.178212  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.178381  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.178577  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.178758  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.178882  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.179048  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.179269  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.179296  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225-m02 && echo "ha-576225-m02" | sudo tee /etc/hostname
	I0308 03:10:04.307645  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225-m02
	
	I0308 03:10:04.307681  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.310639  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.311037  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.311071  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.311239  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.311470  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.311621  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.311777  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.311947  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.312162  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.312185  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:10:04.432237  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:10:04.432280  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:10:04.432326  927850 buildroot.go:174] setting up certificates
	I0308 03:10:04.432350  927850 provision.go:84] configureAuth start
	I0308 03:10:04.432370  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetMachineName
	I0308 03:10:04.432682  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:04.435463  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.435946  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.435972  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.436126  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.438265  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.438534  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.438579  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.438663  927850 provision.go:143] copyHostCerts
	I0308 03:10:04.438698  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:10:04.438744  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:10:04.438782  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:10:04.438878  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:10:04.438984  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:10:04.439014  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:10:04.439024  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:10:04.439065  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:10:04.439202  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:10:04.439228  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:10:04.439239  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:10:04.439282  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:10:04.439368  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225-m02 san=[127.0.0.1 192.168.39.128 ha-576225-m02 localhost minikube]
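The provisioning step above issues a server certificate for the machine whose subject alternative names cover 127.0.0.1, the machine IP, the hostname, localhost and minikube. A self-contained Go sketch of creating such a SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas minikube signs the machine cert with its CA key (ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // SANs mirror the list in the log: loopback, machine IP, hostname, localhost.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-576225-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-576225-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.128")},
        }

        // Self-signed here (template is its own parent); a CA-signed cert would
        // pass the CA certificate and CA private key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }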
	I0308 03:10:04.539888  927850 provision.go:177] copyRemoteCerts
	I0308 03:10:04.539965  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:10:04.540000  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.542707  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.543093  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.543126  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.543310  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.543532  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.543706  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.543887  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:04.632170  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:10:04.632250  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:10:04.659155  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:10:04.659223  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 03:10:04.686224  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:10:04.686301  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:10:04.713436  927850 provision.go:87] duration metric: took 281.06682ms to configureAuth
	I0308 03:10:04.713465  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:10:04.713725  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:04.713826  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:04.716380  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.716812  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:04.716844  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:04.717039  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:04.717252  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.717478  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:04.717660  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:04.717816  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:04.718001  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:04.718018  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:10:04.998622  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:10:04.998654  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:10:04.998683  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetURL
	I0308 03:10:05.000139  927850 main.go:141] libmachine: (ha-576225-m02) DBG | Using libvirt version 6000000
	I0308 03:10:05.002291  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.002632  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.002667  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.002812  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:10:05.002828  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:10:05.002838  927850 client.go:171] duration metric: took 25.194633539s to LocalClient.Create
	I0308 03:10:05.002869  927850 start.go:167] duration metric: took 25.194706452s to libmachine.API.Create "ha-576225"
	I0308 03:10:05.002883  927850 start.go:293] postStartSetup for "ha-576225-m02" (driver="kvm2")
	I0308 03:10:05.002897  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:10:05.002933  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.003208  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:10:05.003238  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.005697  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.006069  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.006100  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.006233  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.006426  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.006618  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.006809  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.092684  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:10:05.097731  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:10:05.097767  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:10:05.097854  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:10:05.097956  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:10:05.097971  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:10:05.098068  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:10:05.109290  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:10:05.136257  927850 start.go:296] duration metric: took 133.359869ms for postStartSetup
	I0308 03:10:05.136308  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetConfigRaw
	I0308 03:10:05.136953  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:05.139714  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.140120  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.140157  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.140359  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:10:05.140540  927850 start.go:128] duration metric: took 25.352471686s to createHost
	I0308 03:10:05.140562  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.142815  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.143181  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.143213  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.143365  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.143541  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.143709  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.143869  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.144099  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:10:05.144317  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0308 03:10:05.144332  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:10:05.254384  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867405.226468429
	
	I0308 03:10:05.254421  927850 fix.go:216] guest clock: 1709867405.226468429
	I0308 03:10:05.254433  927850 fix.go:229] Guest: 2024-03-08 03:10:05.226468429 +0000 UTC Remote: 2024-03-08 03:10:05.14055208 +0000 UTC m=+84.893491005 (delta=85.916349ms)
	I0308 03:10:05.254457  927850 fix.go:200] guest clock delta is within tolerance: 85.916349ms
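The fix.go lines above read the guest clock over SSH (the date +%s.%N command, logged with fmt's %!s(MISSING) markers) and compare it with the host's view of the time, accepting the machine when the difference is small. The arithmetic is simple; the one-second tolerance in this sketch is an assumption for illustration, not necessarily minikube's actual threshold:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest/host clock difference and whether
    // it falls within the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // Values from the log: guest 1709867405.226468429, remote ~85.9ms earlier.
        guest := time.Unix(1709867405, 226468429)
        host := guest.Add(-85916349 * time.Nanosecond)
        delta, ok := clockDeltaOK(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }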
	I0308 03:10:05.254464  927850 start.go:83] releasing machines lock for "ha-576225-m02", held for 25.466484706s
	I0308 03:10:05.254490  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.254868  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:05.257667  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.258151  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.258186  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.260484  927850 out.go:177] * Found network options:
	I0308 03:10:05.261947  927850 out.go:177]   - NO_PROXY=192.168.39.251
	W0308 03:10:05.263198  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:10:05.263246  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.263770  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.263994  927850 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:10:05.264087  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:10:05.264129  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	W0308 03:10:05.264237  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:10:05.264350  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:10:05.264382  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:10:05.266761  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267094  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267133  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.267159  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267326  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.267452  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:05.267478  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:05.267532  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.267616  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:10:05.267704  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.267767  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:10:05.267848  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.267897  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:10:05.268026  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:10:05.519905  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:10:05.526939  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:10:05.527008  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:10:05.544584  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:10:05.544612  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:10:05.544695  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:10:05.563315  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:10:05.577946  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:10:05.578002  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:10:05.592325  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:10:05.607078  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:10:05.744285  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:10:05.935003  927850 docker.go:233] disabling docker service ...
	I0308 03:10:05.935087  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:10:05.951624  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:10:05.965500  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:10:06.096777  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:10:06.228954  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:10:06.244692  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:10:06.265652  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:10:06.265759  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.277177  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:10:06.277255  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.288480  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.299390  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:10:06.310274  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:10:06.321396  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:10:06.331343  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:10:06.331404  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:10:06.344851  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:10:06.354486  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:06.478622  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
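The sequence above switches the node to CRI-O: it stops containerd, masks cri-docker and docker, points crictl at crio.sock, rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup), loads br_netfilter, enables IP forwarding, and restarts crio. A small Go sketch of the same config rewriting done in memory with regexp; the sample input and keys mirror the log, but this is an illustration, not minikube's implementation:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same substitutions the log performs with sed:
    // point pause_image at the requested pause image and force the cgroupfs
    // cgroup manager with a per-pod conmon cgroup.
    func rewriteCrioConf(conf, pauseImage string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then add one after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n" +
            "[crio.runtime]\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(sample, "registry.k8s.io/pause:3.9"))
    }

In the real flow the edited file is then picked up by the `sudo systemctl restart crio` step shown above, after which the driver waits for /var/run/crio/crio.sock and checks the crictl version.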
	I0308 03:10:06.625431  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:10:06.625522  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:10:06.630783  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:10:06.630850  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:10:06.635051  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:10:06.675945  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:10:06.676022  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:10:06.709412  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:10:06.740409  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:10:06.741755  927850 out.go:177]   - env NO_PROXY=192.168.39.251
	I0308 03:10:06.742884  927850 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:10:06.745660  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:06.745995  927850 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:09:55 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:10:06.746018  927850 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:10:06.746319  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:10:06.750763  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:10:06.764553  927850 mustload.go:65] Loading cluster: ha-576225
	I0308 03:10:06.764726  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:06.765007  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:06.765035  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:06.779680  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0308 03:10:06.780134  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:06.780636  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:06.780659  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:06.781019  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:06.781208  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:10:06.782879  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:10:06.783158  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:06.783189  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:06.797495  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I0308 03:10:06.797980  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:06.798455  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:06.798476  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:06.798773  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:06.798958  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:10:06.799106  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.128
	I0308 03:10:06.799125  927850 certs.go:194] generating shared ca certs ...
	I0308 03:10:06.799144  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:06.799270  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:10:06.799308  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:10:06.799319  927850 certs.go:256] generating profile certs ...
	I0308 03:10:06.799385  927850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:10:06.799410  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907
	I0308 03:10:06.799424  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.254]
	I0308 03:10:07.059503  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 ...
	I0308 03:10:07.059536  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907: {Name:mk4518f2838cb83538c6e1c972800ca0fb4818ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:07.059710  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907 ...
	I0308 03:10:07.059724  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907: {Name:mk8e30d5c74032633160373e582b2bd039ca9f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:10:07.059795  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.a7079907 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:10:07.059930  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.a7079907 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:10:07.060074  927850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:10:07.060092  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:10:07.060104  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:10:07.060118  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:10:07.060130  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:10:07.060146  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:10:07.060157  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:10:07.060167  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:10:07.060176  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
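
Note: the profile apiserver certificate generated above carries SANs for the service IP, localhost, both control-plane node IPs and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.251, 192.168.39.128, 192.168.39.254). Once the cert has been copied onto the node, a quick way to confirm those SANs, purely as an illustration:

    # print the SANs baked into the apiserver certificate (path from the transfer list above)
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
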
	I0308 03:10:07.060223  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:10:07.060254  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:10:07.060264  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:10:07.060285  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:10:07.060307  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:10:07.060330  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:10:07.060366  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:10:07.060391  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.060404  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.060416  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.060450  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:10:07.063350  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:07.063811  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:10:07.063849  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:07.064050  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:10:07.064263  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:10:07.064441  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:10:07.064580  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:10:07.145650  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0308 03:10:07.151560  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0308 03:10:07.164450  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0308 03:10:07.169489  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0308 03:10:07.181017  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0308 03:10:07.189382  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0308 03:10:07.203506  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0308 03:10:07.208641  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0308 03:10:07.223311  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0308 03:10:07.228747  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0308 03:10:07.245859  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0308 03:10:07.250855  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0308 03:10:07.263944  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:10:07.295869  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:10:07.325852  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:10:07.353308  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:10:07.379736  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0308 03:10:07.406994  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:10:07.433332  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:10:07.460120  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:10:07.486225  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:10:07.511795  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:10:07.538188  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:10:07.569432  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0308 03:10:07.593900  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0308 03:10:07.612561  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0308 03:10:07.631212  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0308 03:10:07.649664  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0308 03:10:07.668302  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0308 03:10:07.686140  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
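
Note: for an HA control plane, the shared key material copied above (cluster CA, proxy-client CA, front-proxy CA, the service-account keypair and the etcd CA) must be identical on every control-plane node. A simple cross-check, assuming the target paths from the transfer lines above:

    # run on each control-plane node; the digests must match node-to-node
    sudo sha256sum \
      /var/lib/minikube/certs/ca.crt \
      /var/lib/minikube/certs/front-proxy-ca.crt \
      /var/lib/minikube/certs/sa.pub \
      /var/lib/minikube/certs/etcd/ca.crt
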
	I0308 03:10:07.703806  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:10:07.709839  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:10:07.721289  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.726266  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.726324  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:10:07.732304  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:10:07.743882  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:10:07.755295  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.760186  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.760244  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:10:07.766107  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:10:07.777291  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:10:07.788697  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.794094  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.794151  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:10:07.800025  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
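
Note: the test -L / ln -fs commands above install each CA under /etc/ssl/certs/<subject-hash>.0, the lookup layout OpenSSL expects; the hash is exactly what openssl x509 -hash prints. A sketch of the same step for the minikube CA (for this cluster the hash resolves to b5213941, as seen in the log):

    # compute the subject hash and create the symlink OpenSSL's cert lookup expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
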
	I0308 03:10:07.811159  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:10:07.815692  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:10:07.815757  927850 kubeadm.go:928] updating node {m02 192.168.39.128 8443 v1.28.4 crio true true} ...
	I0308 03:10:07.815879  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
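
Note: the drop-in above overrides the kubelet ExecStart with node-specific flags (hostname override, node IP, bootstrap kubeconfig) and ties kubelet to crio.service; a few lines further down the log, systemd is reloaded and kubelet started. A minimal sketch for inspecting and applying such a drop-in by hand:

    # show the effective kubelet unit including the 10-kubeadm.conf drop-in,
    # then reload systemd and (re)start kubelet
    systemctl cat kubelet
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
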
	I0308 03:10:07.815909  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:10:07.815936  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
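
Note: the static-pod manifest above runs kube-vip on the control-plane node: with cp_enable and leader election turned on, whichever node holds the plndr-cp-lock lease announces the VIP 192.168.39.254 on eth0 and serves the API endpoint on port 8443. Two quick manual checks, as an illustration (on a healthy cluster the healthz call typically returns ok):

    # does this node currently hold the control-plane VIP?
    ip addr show eth0 | grep 192.168.39.254
    # is the API reachable through the VIP? (self-signed CA, hence -k)
    curl -k https://192.168.39.254:8443/healthz
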
	I0308 03:10:07.815975  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:10:07.826637  927850 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0308 03:10:07.826683  927850 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0308 03:10:07.837110  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0308 03:10:07.837131  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:10:07.837188  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:10:07.837211  927850 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0308 03:10:07.837217  927850 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0308 03:10:07.841970  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 03:10:07.842000  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0308 03:10:08.978992  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:10:08.994317  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:10:08.994425  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:10:08.999797  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 03:10:08.999834  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0308 03:10:11.747758  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:10:11.747854  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:10:11.753769  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 03:10:11.753815  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
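
Note: kubectl, kubelet and kubeadm are fetched from dl.k8s.io with a companion .sha256 file, cached under .minikube/cache, and then copied into /var/lib/minikube/binaries/v1.28.4 on the new node. A manual sketch of the same checksum-verified download for one of the binaries, using the URL from the log:

    # download kubelet v1.28.4 and verify it against the published sha256
    curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
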
	I0308 03:10:12.019264  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0308 03:10:12.031121  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0308 03:10:12.050893  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:10:12.070433  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:10:12.088453  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:10:12.092727  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:10:12.107128  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:12.255678  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:10:12.275756  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:10:12.276086  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:10:12.276114  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:10:12.291159  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0308 03:10:12.291608  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:10:12.292095  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:10:12.292121  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:10:12.292483  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:10:12.292697  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:10:12.292844  927850 start.go:316] joinCluster: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:10:12.292971  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 03:10:12.292992  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:10:12.296265  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:12.296708  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:10:12.296731  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:10:12.296911  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:10:12.297105  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:10:12.297270  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:10:12.297431  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:10:12.479835  927850 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:10:12.479894  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5tskky.r2mo3r85yoyvy2ry --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m02 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443"
	I0308 03:10:52.931150  927850 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5tskky.r2mo3r85yoyvy2ry --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m02 --control-plane --apiserver-advertise-address=192.168.39.128 --apiserver-bind-port=8443": (40.451218401s)
	I0308 03:10:52.931210  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 03:10:53.367922  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225-m02 minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=false
	I0308 03:10:53.491882  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-576225-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0308 03:10:53.623511  927850 start.go:318] duration metric: took 41.330661s to joinCluster
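
Note: joining the second control-plane node is the two-step flow shown above: the primary mints a join command with a fresh token, the new node runs it with the control-plane flags and its own advertise address, and the node is then labeled and has its control-plane NoSchedule taint removed so it can also schedule workloads. A condensed sketch with the per-cluster secrets left as placeholders:

    # on the primary control-plane node: mint a join command with a fresh token
    sudo kubeadm token create --print-join-command --ttl=0
    # on the joining node: run the printed command plus the control-plane flags
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.128 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock
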
	I0308 03:10:53.623601  927850 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:10:53.625857  927850 out.go:177] * Verifying Kubernetes components...
	I0308 03:10:53.623924  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:10:53.627218  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:10:53.802553  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:10:53.820691  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:10:53.820977  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0308 03:10:53.821054  927850 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.251:8443
	I0308 03:10:53.821246  927850 node_ready.go:35] waiting up to 6m0s for node "ha-576225-m02" to be "Ready" ...
	I0308 03:10:53.821413  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:53.821422  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:53.821430  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:53.821433  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:53.831189  927850 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0308 03:10:54.322156  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:54.322178  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:54.322187  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:54.322191  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:54.328637  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:10:54.822207  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:54.822228  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:54.822237  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:54.822241  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:54.838652  927850 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0308 03:10:55.322040  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:55.322063  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:55.322072  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:55.322077  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:55.325920  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:55.821581  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:55.821608  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:55.821620  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:55.821625  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:55.825637  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:55.826162  927850 node_ready.go:53] node "ha-576225-m02" has status "Ready":"False"
	I0308 03:10:56.322524  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:56.322549  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:56.322558  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:56.322562  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:56.328145  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:10:56.821872  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:56.821895  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:56.821902  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:56.821906  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:56.824917  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:57.321946  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:57.321968  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:57.321976  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:57.321980  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:57.325866  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:57.821975  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:57.822006  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:57.822016  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:57.822020  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:57.825969  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:57.826687  927850 node_ready.go:53] node "ha-576225-m02" has status "Ready":"False"
	I0308 03:10:58.322118  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:58.322150  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:58.322159  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:58.322164  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:58.326088  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:58.821490  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:58.821516  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:58.821525  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:58.821529  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:58.825017  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.321723  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.321749  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.321761  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.321765  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.325337  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.821569  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.821590  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.821603  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.821612  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.825043  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.825842  927850 node_ready.go:49] node "ha-576225-m02" has status "Ready":"True"
	I0308 03:10:59.825863  927850 node_ready.go:38] duration metric: took 6.004571208s for node "ha-576225-m02" to be "Ready" ...
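
Note: the readiness wait above is a plain GET of the Node object roughly every half second until its Ready condition turns True; here it took about six seconds. The equivalent wait expressed with kubectl, as a sketch:

    # block until the new node reports Ready (same six-minute budget as the test)
    kubectl wait --for=condition=Ready node/ha-576225-m02 --timeout=6m
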
	I0308 03:10:59.825872  927850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:10:59.825969  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:10:59.825979  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.825987  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.825989  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.830823  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:10:59.836818  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.836894  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8qvhp
	I0308 03:10:59.836903  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.836910  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.836914  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.839755  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.840405  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.840423  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.840430  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.840434  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.842984  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.843708  927850 pod_ready.go:92] pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.843726  927850 pod_ready.go:81] duration metric: took 6.883358ms for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.843736  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.843790  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pqz96
	I0308 03:10:59.843801  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.843811  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.843835  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.846414  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.847169  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.847184  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.847190  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.847195  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.849510  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.850109  927850 pod_ready.go:92] pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.850132  927850 pod_ready.go:81] duration metric: took 6.388886ms for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.850144  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.850209  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225
	I0308 03:10:59.850220  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.850230  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.850236  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.852859  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.853417  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:10:59.853435  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.853441  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.853445  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.855891  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.856435  927850 pod_ready.go:92] pod "etcd-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.856449  927850 pod_ready.go:81] duration metric: took 6.293059ms for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.856457  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.856501  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m02
	I0308 03:10:59.856508  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.856515  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.856520  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.859426  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:10:59.860412  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:10:59.860427  927850 round_trippers.go:469] Request Headers:
	I0308 03:10:59.860433  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:10:59.860436  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:10:59.864403  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:10:59.865367  927850 pod_ready.go:92] pod "etcd-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:10:59.865382  927850 pod_ready.go:81] duration metric: took 8.919794ms for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:10:59.865394  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:00.021743  927850 request.go:629] Waited for 156.268188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:11:00.021814  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:11:00.021819  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.021827  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.021831  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.025408  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.222580  927850 request.go:629] Waited for 196.401798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:00.222643  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:00.222647  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.222655  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.222659  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.226600  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.227235  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:00.227260  927850 pod_ready.go:81] duration metric: took 361.860232ms for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
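
Note: after the node is Ready, the test waits for each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) in kube-system, one GET per pod plus one per hosting node; the "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter spacing out those GETs, not from server-side priority and fairness. A rough kubectl equivalent for two of those waits, using the labels listed earlier:

    # wait for the control-plane pods in kube-system to report Ready
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=6m
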
	I0308 03:11:00.227270  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:00.422084  927850 request.go:629] Waited for 194.716633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.422176  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.422188  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.422202  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.422212  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.425971  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.621638  927850 request.go:629] Waited for 194.290053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:00.621699  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:00.621704  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.621712  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.621716  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.625381  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:00.822344  927850 request.go:629] Waited for 94.327919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.822409  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:00.822416  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:00.822429  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:00.822439  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:00.825937  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.022140  927850 request.go:629] Waited for 195.398128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.022243  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.022254  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.022264  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.022269  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.026179  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.228037  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:01.228067  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.228079  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.228085  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.231964  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:01.422114  927850 request.go:629] Waited for 189.353352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.422177  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.422182  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.422190  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.422194  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.426751  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:01.728306  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:01.728342  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.728351  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.728357  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.732820  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:01.822083  927850 request.go:629] Waited for 87.699265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.822163  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:01.822177  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:01.822188  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:01.822194  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:01.825541  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:02.227502  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:02.227526  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.227534  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.227538  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.230944  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:02.231604  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:02.231621  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.231628  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.231632  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.234638  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:02.235397  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"False"
	I0308 03:11:02.728289  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:02.728314  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.728322  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.728327  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.732588  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:02.733650  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:02.733669  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:02.733680  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:02.733687  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:02.736738  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:03.227801  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:03.227829  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.227840  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.227845  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.231929  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:03.232757  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:03.232770  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.232778  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.232781  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.236055  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:03.728016  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:03.728044  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.728056  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.728062  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.734893  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:11:03.735870  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:03.735887  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:03.735894  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:03.735900  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:03.738588  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:04.227593  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:04.227617  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.227626  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.227629  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.231236  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.232176  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:04.232192  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.232202  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.232209  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.235355  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.236206  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"False"
	I0308 03:11:04.727591  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:11:04.727625  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.727634  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.727639  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.731688  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:04.732340  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:04.732357  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.732364  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.732369  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.735759  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.736889  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:04.736910  927850 pod_ready.go:81] duration metric: took 4.509633326s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.736920  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.736979  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225
	I0308 03:11:04.736992  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.737000  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.737007  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.740014  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:11:04.740555  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:04.740570  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.740577  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.740581  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.744105  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:04.744907  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:04.744923  927850 pod_ready.go:81] duration metric: took 7.997063ms for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.744932  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:04.821885  927850 request.go:629] Waited for 76.877856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:11:04.821977  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:11:04.821990  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:04.822001  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:04.822006  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:04.826196  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.022297  927850 request.go:629] Waited for 195.390269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.022380  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.022386  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.022395  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.022401  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.026876  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.027839  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.027864  927850 pod_ready.go:81] duration metric: took 282.922993ms for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
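	The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's default client-side rate limiter (historically QPS 5 with a burst of 10), not by server-side API Priority and Fairness. As a hedged illustration only (not minikube's actual wiring; the kubeconfig path is hypothetical), a caller that wants fewer of these local pauses would raise the limits on its rest.Config before building a clientset:

	package example

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newFastClient() (*kubernetes.Clientset, error) {
		// Hypothetical kubeconfig path; minikube builds its config differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			return nil, err
		}
		// Raise the client-side rate limiter so bursts of GETs (like the
		// readiness polling in this log) are not queued locally.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return nil, err
		}
		fmt.Printf("client ready: %T\n", cs)
		return cs, nil
	}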
	I0308 03:11:05.027879  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.222351  927850 request.go:629] Waited for 194.381308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:11:05.222432  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:11:05.222437  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.222445  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.222462  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.226185  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.421873  927850 request.go:629] Waited for 194.774695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:05.421949  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:05.421958  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.421969  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.421977  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.426262  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:11:05.427124  927850 pod_ready.go:92] pod "kube-proxy-pcmj2" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.427147  927850 pod_ready.go:81] duration metric: took 399.259295ms for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.427158  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.622186  927850 request.go:629] Waited for 194.942273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:11:05.622304  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:11:05.622311  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.622342  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.622353  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.625978  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.822060  927850 request.go:629] Waited for 195.367031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.822152  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:05.822165  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:05.822176  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:05.822185  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:05.825261  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:05.826003  927850 pod_ready.go:92] pod "kube-proxy-vjfqv" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:05.826027  927850 pod_ready.go:81] duration metric: took 398.861018ms for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:05.826040  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.022230  927850 request.go:629] Waited for 196.09097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:11:06.022335  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:11:06.022346  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.022357  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.022368  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.025832  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.222037  927850 request.go:629] Waited for 195.346155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:06.222095  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:11:06.222099  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.222107  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.222111  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.225424  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.226317  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:06.226335  927850 pod_ready.go:81] duration metric: took 400.288016ms for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.226348  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.422432  927850 request.go:629] Waited for 195.999355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:11:06.422535  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:11:06.422541  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.422549  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.422556  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.426177  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.622310  927850 request.go:629] Waited for 195.382474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:06.622426  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:11:06.622443  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.622459  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.622465  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.625751  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.626462  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:11:06.626496  927850 pod_ready.go:81] duration metric: took 400.136357ms for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:11:06.626525  927850 pod_ready.go:38] duration metric: took 6.800614949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
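	For reference, the pod_ready loop logged above amounts to polling each pod's Ready condition on a fixed interval (roughly every 500ms here) until it reports True or the timeout expires. A minimal sketch of that pattern with client-go, assuming a clientset built from a rest.Config as sketched earlier (this is a simplification, not minikube's pod_ready.go):

	package example

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls a pod until its Ready condition is True or the
	// timeout elapses -- a simplified sketch of the loop in the log above.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible above
		}
	}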
	I0308 03:11:06.626568  927850 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:11:06.626728  927850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:11:06.643192  927850 api_server.go:72] duration metric: took 13.019514528s to wait for apiserver process to appear ...
	I0308 03:11:06.643216  927850 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:11:06.643236  927850 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0308 03:11:06.648033  927850 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0308 03:11:06.648100  927850 round_trippers.go:463] GET https://192.168.39.251:8443/version
	I0308 03:11:06.648108  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.648115  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.648122  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.651408  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:06.651652  927850 api_server.go:141] control plane version: v1.28.4
	I0308 03:11:06.651674  927850 api_server.go:131] duration metric: took 8.45101ms to wait for apiserver health ...
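	The healthz probe at api_server.go:253 above is an HTTPS GET against /healthz that is expected to return HTTP 200 (the body logged here is "ok"). A hedged, self-contained sketch of that check using only the standard library, assuming a CA bundle at a hypothetical path (minikube reuses its own generated certificates instead):

	package example

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// checkHealthz issues GET https://<addr>/healthz and expects 200 "ok".
	func checkHealthz(addr, caPath string) error {
		caPEM, err := os.ReadFile(caPath) // hypothetical CA bundle path
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://" + addr + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The production check mainly cares about the status code; the body
		// comparison is shown because the log above prints "ok".
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}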
	I0308 03:11:06.651683  927850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:11:06.822013  927850 request.go:629] Waited for 170.250813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:06.822139  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:06.822150  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:06.822159  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:06.822169  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:06.828581  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:11:06.835213  927850 system_pods.go:59] 17 kube-system pods found
	I0308 03:11:06.835245  927850 system_pods.go:61] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:11:06.835253  927850 system_pods.go:61] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:11:06.835259  927850 system_pods.go:61] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:11:06.835263  927850 system_pods.go:61] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:11:06.835268  927850 system_pods.go:61] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:11:06.835272  927850 system_pods.go:61] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:11:06.835277  927850 system_pods.go:61] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:11:06.835285  927850 system_pods.go:61] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:11:06.835291  927850 system_pods.go:61] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:11:06.835299  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:11:06.835305  927850 system_pods.go:61] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:11:06.835310  927850 system_pods.go:61] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:11:06.835321  927850 system_pods.go:61] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:11:06.835332  927850 system_pods.go:61] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:11:06.835336  927850 system_pods.go:61] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:11:06.835340  927850 system_pods.go:61] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:11:06.835344  927850 system_pods.go:61] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:11:06.835352  927850 system_pods.go:74] duration metric: took 183.663118ms to wait for pod list to return data ...
	I0308 03:11:06.835363  927850 default_sa.go:34] waiting for default service account to be created ...
	I0308 03:11:07.021668  927850 request.go:629] Waited for 186.212568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:11:07.021737  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:11:07.021745  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.021764  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.021775  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.025514  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:07.025881  927850 default_sa.go:45] found service account: "default"
	I0308 03:11:07.025910  927850 default_sa.go:55] duration metric: took 190.535225ms for default service account to be created ...
	I0308 03:11:07.025923  927850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 03:11:07.222137  927850 request.go:629] Waited for 196.091036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:07.222218  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:11:07.222226  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.222239  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.222248  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.241058  927850 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0308 03:11:07.246418  927850 system_pods.go:86] 17 kube-system pods found
	I0308 03:11:07.246447  927850 system_pods.go:89] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:11:07.246457  927850 system_pods.go:89] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:11:07.246462  927850 system_pods.go:89] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:11:07.246466  927850 system_pods.go:89] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:11:07.246470  927850 system_pods.go:89] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:11:07.246474  927850 system_pods.go:89] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:11:07.246478  927850 system_pods.go:89] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:11:07.246482  927850 system_pods.go:89] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:11:07.246486  927850 system_pods.go:89] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:11:07.246490  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:11:07.246495  927850 system_pods.go:89] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:11:07.246498  927850 system_pods.go:89] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:11:07.246505  927850 system_pods.go:89] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:11:07.246509  927850 system_pods.go:89] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:11:07.246513  927850 system_pods.go:89] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:11:07.246517  927850 system_pods.go:89] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:11:07.246523  927850 system_pods.go:89] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:11:07.246529  927850 system_pods.go:126] duration metric: took 220.600615ms to wait for k8s-apps to be running ...
	I0308 03:11:07.246546  927850 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 03:11:07.246593  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:11:07.266495  927850 system_svc.go:56] duration metric: took 19.940564ms WaitForService to wait for kubelet
	I0308 03:11:07.266530  927850 kubeadm.go:576] duration metric: took 13.642854924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:11:07.266554  927850 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:11:07.422263  927850 request.go:629] Waited for 155.593577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes
	I0308 03:11:07.422320  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes
	I0308 03:11:07.422325  927850 round_trippers.go:469] Request Headers:
	I0308 03:11:07.422332  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:11:07.422340  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:11:07.426232  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:11:07.427517  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:11:07.427553  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:11:07.427571  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:11:07.427577  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:11:07.427583  927850 node_conditions.go:105] duration metric: took 161.022579ms to run NodePressure ...
	I0308 03:11:07.427601  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:11:07.427632  927850 start.go:254] writing updated cluster config ...
	I0308 03:11:07.429792  927850 out.go:177] 
	I0308 03:11:07.431381  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:07.431517  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:07.433325  927850 out.go:177] * Starting "ha-576225-m03" control-plane node in "ha-576225" cluster
	I0308 03:11:07.434574  927850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:11:07.434598  927850 cache.go:56] Caching tarball of preloaded images
	I0308 03:11:07.434692  927850 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:11:07.434704  927850 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:11:07.434784  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:07.434982  927850 start.go:360] acquireMachinesLock for ha-576225-m03: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:11:07.435031  927850 start.go:364] duration metric: took 25.816µs to acquireMachinesLock for "ha-576225-m03"
	I0308 03:11:07.435050  927850 start.go:93] Provisioning new machine with config: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
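	The cluster config echoed above is what gets persisted to .../profiles/ha-576225/config.json. As an illustration only (the struct below is a hypothetical subset mirroring the fields visible in the dump, not minikube's full ClusterConfig type), reading back the node list is a plain JSON decode:

	package example

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// node mirrors a few of the fields visible in the config dump above;
	// a hypothetical subset, not minikube's actual struct.
	type node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	type profileConfig struct {
		Name  string
		Nodes []node
	}

	func printNodes(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			return err
		}
		for _, n := range cfg.Nodes {
			fmt.Printf("%s %s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
		}
		return nil
	}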
	I0308 03:11:07.435158  927850 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0308 03:11:07.437487  927850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:11:07.437569  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:07.437594  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:07.453229  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35035
	I0308 03:11:07.453625  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:07.454136  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:07.454166  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:07.454474  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:07.454676  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:07.454862  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:07.455027  927850 start.go:159] libmachine.API.Create for "ha-576225" (driver="kvm2")
	I0308 03:11:07.455052  927850 client.go:168] LocalClient.Create starting
	I0308 03:11:07.455098  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:11:07.455156  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:11:07.455179  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:11:07.455252  927850 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:11:07.455283  927850 main.go:141] libmachine: Decoding PEM data...
	I0308 03:11:07.455300  927850 main.go:141] libmachine: Parsing certificate...
	I0308 03:11:07.455326  927850 main.go:141] libmachine: Running pre-create checks...
	I0308 03:11:07.455338  927850 main.go:141] libmachine: (ha-576225-m03) Calling .PreCreateCheck
	I0308 03:11:07.455513  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:07.455903  927850 main.go:141] libmachine: Creating machine...
	I0308 03:11:07.455939  927850 main.go:141] libmachine: (ha-576225-m03) Calling .Create
	I0308 03:11:07.456058  927850 main.go:141] libmachine: (ha-576225-m03) Creating KVM machine...
	I0308 03:11:07.457294  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found existing default KVM network
	I0308 03:11:07.457440  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found existing private KVM network mk-ha-576225
	I0308 03:11:07.457580  927850 main.go:141] libmachine: (ha-576225-m03) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 ...
	I0308 03:11:07.457604  927850 main.go:141] libmachine: (ha-576225-m03) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:11:07.457669  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.457559  928590 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:11:07.457758  927850 main.go:141] libmachine: (ha-576225-m03) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:11:07.705383  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.705216  928590 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa...
	I0308 03:11:07.778475  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.778328  928590 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/ha-576225-m03.rawdisk...
	I0308 03:11:07.778529  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Writing magic tar header
	I0308 03:11:07.778548  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Writing SSH key tar header
	I0308 03:11:07.778561  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:07.778499  928590 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 ...
	I0308 03:11:07.778721  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03
	I0308 03:11:07.778756  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:11:07.778773  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03 (perms=drwx------)
	I0308 03:11:07.778786  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:11:07.778793  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:11:07.778801  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:11:07.778812  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:11:07.778835  927850 main.go:141] libmachine: (ha-576225-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 03:11:07.778850  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:11:07.778860  927850 main.go:141] libmachine: (ha-576225-m03) Creating domain...
	I0308 03:11:07.778871  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:11:07.778886  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:11:07.778903  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:11:07.778916  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Checking permissions on dir: /home
	I0308 03:11:07.778927  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Skipping /home - not owner
	I0308 03:11:07.779938  927850 main.go:141] libmachine: (ha-576225-m03) define libvirt domain using xml: 
	I0308 03:11:07.779969  927850 main.go:141] libmachine: (ha-576225-m03) <domain type='kvm'>
	I0308 03:11:07.779981  927850 main.go:141] libmachine: (ha-576225-m03)   <name>ha-576225-m03</name>
	I0308 03:11:07.779994  927850 main.go:141] libmachine: (ha-576225-m03)   <memory unit='MiB'>2200</memory>
	I0308 03:11:07.780004  927850 main.go:141] libmachine: (ha-576225-m03)   <vcpu>2</vcpu>
	I0308 03:11:07.780015  927850 main.go:141] libmachine: (ha-576225-m03)   <features>
	I0308 03:11:07.780029  927850 main.go:141] libmachine: (ha-576225-m03)     <acpi/>
	I0308 03:11:07.780040  927850 main.go:141] libmachine: (ha-576225-m03)     <apic/>
	I0308 03:11:07.780053  927850 main.go:141] libmachine: (ha-576225-m03)     <pae/>
	I0308 03:11:07.780064  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780077  927850 main.go:141] libmachine: (ha-576225-m03)   </features>
	I0308 03:11:07.780093  927850 main.go:141] libmachine: (ha-576225-m03)   <cpu mode='host-passthrough'>
	I0308 03:11:07.780101  927850 main.go:141] libmachine: (ha-576225-m03)   
	I0308 03:11:07.780110  927850 main.go:141] libmachine: (ha-576225-m03)   </cpu>
	I0308 03:11:07.780118  927850 main.go:141] libmachine: (ha-576225-m03)   <os>
	I0308 03:11:07.780128  927850 main.go:141] libmachine: (ha-576225-m03)     <type>hvm</type>
	I0308 03:11:07.780140  927850 main.go:141] libmachine: (ha-576225-m03)     <boot dev='cdrom'/>
	I0308 03:11:07.780154  927850 main.go:141] libmachine: (ha-576225-m03)     <boot dev='hd'/>
	I0308 03:11:07.780173  927850 main.go:141] libmachine: (ha-576225-m03)     <bootmenu enable='no'/>
	I0308 03:11:07.780186  927850 main.go:141] libmachine: (ha-576225-m03)   </os>
	I0308 03:11:07.780197  927850 main.go:141] libmachine: (ha-576225-m03)   <devices>
	I0308 03:11:07.780208  927850 main.go:141] libmachine: (ha-576225-m03)     <disk type='file' device='cdrom'>
	I0308 03:11:07.780223  927850 main.go:141] libmachine: (ha-576225-m03)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/boot2docker.iso'/>
	I0308 03:11:07.780237  927850 main.go:141] libmachine: (ha-576225-m03)       <target dev='hdc' bus='scsi'/>
	I0308 03:11:07.780254  927850 main.go:141] libmachine: (ha-576225-m03)       <readonly/>
	I0308 03:11:07.780268  927850 main.go:141] libmachine: (ha-576225-m03)     </disk>
	I0308 03:11:07.780297  927850 main.go:141] libmachine: (ha-576225-m03)     <disk type='file' device='disk'>
	I0308 03:11:07.780315  927850 main.go:141] libmachine: (ha-576225-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:11:07.780335  927850 main.go:141] libmachine: (ha-576225-m03)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/ha-576225-m03.rawdisk'/>
	I0308 03:11:07.780350  927850 main.go:141] libmachine: (ha-576225-m03)       <target dev='hda' bus='virtio'/>
	I0308 03:11:07.780362  927850 main.go:141] libmachine: (ha-576225-m03)     </disk>
	I0308 03:11:07.780377  927850 main.go:141] libmachine: (ha-576225-m03)     <interface type='network'>
	I0308 03:11:07.780390  927850 main.go:141] libmachine: (ha-576225-m03)       <source network='mk-ha-576225'/>
	I0308 03:11:07.780403  927850 main.go:141] libmachine: (ha-576225-m03)       <model type='virtio'/>
	I0308 03:11:07.780427  927850 main.go:141] libmachine: (ha-576225-m03)     </interface>
	I0308 03:11:07.780449  927850 main.go:141] libmachine: (ha-576225-m03)     <interface type='network'>
	I0308 03:11:07.780462  927850 main.go:141] libmachine: (ha-576225-m03)       <source network='default'/>
	I0308 03:11:07.780474  927850 main.go:141] libmachine: (ha-576225-m03)       <model type='virtio'/>
	I0308 03:11:07.780485  927850 main.go:141] libmachine: (ha-576225-m03)     </interface>
	I0308 03:11:07.780495  927850 main.go:141] libmachine: (ha-576225-m03)     <serial type='pty'>
	I0308 03:11:07.780510  927850 main.go:141] libmachine: (ha-576225-m03)       <target port='0'/>
	I0308 03:11:07.780524  927850 main.go:141] libmachine: (ha-576225-m03)     </serial>
	I0308 03:11:07.780534  927850 main.go:141] libmachine: (ha-576225-m03)     <console type='pty'>
	I0308 03:11:07.780545  927850 main.go:141] libmachine: (ha-576225-m03)       <target type='serial' port='0'/>
	I0308 03:11:07.780556  927850 main.go:141] libmachine: (ha-576225-m03)     </console>
	I0308 03:11:07.780567  927850 main.go:141] libmachine: (ha-576225-m03)     <rng model='virtio'>
	I0308 03:11:07.780583  927850 main.go:141] libmachine: (ha-576225-m03)       <backend model='random'>/dev/random</backend>
	I0308 03:11:07.780597  927850 main.go:141] libmachine: (ha-576225-m03)     </rng>
	I0308 03:11:07.780609  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780618  927850 main.go:141] libmachine: (ha-576225-m03)     
	I0308 03:11:07.780626  927850 main.go:141] libmachine: (ha-576225-m03)   </devices>
	I0308 03:11:07.780636  927850 main.go:141] libmachine: (ha-576225-m03) </domain>
	I0308 03:11:07.780644  927850 main.go:141] libmachine: (ha-576225-m03) 
	I0308 03:11:07.787525  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:5a:cf:77 in network default
	I0308 03:11:07.788496  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:07.788514  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring networks are active...
	I0308 03:11:07.789377  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring network default is active
	I0308 03:11:07.789748  927850 main.go:141] libmachine: (ha-576225-m03) Ensuring network mk-ha-576225 is active
	I0308 03:11:07.790211  927850 main.go:141] libmachine: (ha-576225-m03) Getting domain xml...
	I0308 03:11:07.791003  927850 main.go:141] libmachine: (ha-576225-m03) Creating domain...
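	The domain XML printed above is handed to libvirt; the "Getting domain xml" / "Creating domain" lines correspond to a define-then-create step. A minimal sketch with the libvirt Go bindings, assuming the libvirt.org/go/libvirt module is available (minikube's kvm2 driver wraps this differently):

	package example

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	// defineAndStart defines a persistent domain from the rendered XML and
	// boots it, roughly the step logged above.
	func defineAndStart(domainXML string) error {
		// Same URI as KVMQemuURI in the config dump above.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the defined domain
			return err
		}
		name, _ := dom.GetName()
		fmt.Println("started domain", name)
		return nil
	}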
	I0308 03:11:09.000039  927850 main.go:141] libmachine: (ha-576225-m03) Waiting to get IP...
	I0308 03:11:09.000875  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.001266  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.001330  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.001258  928590 retry.go:31] will retry after 216.744664ms: waiting for machine to come up
	I0308 03:11:09.220137  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.220744  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.220799  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.220673  928590 retry.go:31] will retry after 344.32551ms: waiting for machine to come up
	I0308 03:11:09.566272  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.566783  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.566814  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.566721  928590 retry.go:31] will retry after 418.834054ms: waiting for machine to come up
	I0308 03:11:09.987101  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:09.987623  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:09.987654  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:09.987563  928590 retry.go:31] will retry after 368.096971ms: waiting for machine to come up
	I0308 03:11:10.357008  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:10.357499  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:10.357525  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:10.357447  928590 retry.go:31] will retry after 735.02061ms: waiting for machine to come up
	I0308 03:11:11.094424  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:11.094943  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:11.094976  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:11.094880  928590 retry.go:31] will retry after 803.752614ms: waiting for machine to come up
	I0308 03:11:11.900117  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:11.900627  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:11.900655  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:11.900567  928590 retry.go:31] will retry after 853.28583ms: waiting for machine to come up
	I0308 03:11:12.755426  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:12.755964  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:12.756037  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:12.755952  928590 retry.go:31] will retry after 1.409037774s: waiting for machine to come up
	I0308 03:11:14.166667  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:14.167183  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:14.167236  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:14.167106  928590 retry.go:31] will retry after 1.591994181s: waiting for machine to come up
	I0308 03:11:15.760930  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:15.761465  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:15.761493  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:15.761405  928590 retry.go:31] will retry after 1.956770276s: waiting for machine to come up
	I0308 03:11:17.720344  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:17.720835  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:17.720859  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:17.720808  928590 retry.go:31] will retry after 2.308480723s: waiting for machine to come up
	I0308 03:11:20.030491  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:20.030991  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:20.031022  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:20.030944  928590 retry.go:31] will retry after 2.597182441s: waiting for machine to come up
	I0308 03:11:22.629604  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:22.630066  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:22.630089  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:22.630013  928590 retry.go:31] will retry after 4.489691082s: waiting for machine to come up
	I0308 03:11:27.123686  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:27.124120  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find current IP address of domain ha-576225-m03 in network mk-ha-576225
	I0308 03:11:27.124139  927850 main.go:141] libmachine: (ha-576225-m03) DBG | I0308 03:11:27.124081  928590 retry.go:31] will retry after 3.754931444s: waiting for machine to come up
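	The retry.go:31 lines above show the driver polling the DHCP leases for the new MAC address with steadily growing, jittered delays until the guest obtains an IP. The shape of that loop is roughly the following hand-rolled sketch (not minikube's retry package; lookupIP stands in for the real lease query):

	package example

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with growing, jittered delays, mirroring the
	// "will retry after ..." lines above.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
			if delay > 5*time.Second {
				delay = 5 * time.Second // cap the backoff
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}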
	I0308 03:11:30.882410  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.883076  927850 main.go:141] libmachine: (ha-576225-m03) Found IP for machine: 192.168.39.17
	I0308 03:11:30.883097  927850 main.go:141] libmachine: (ha-576225-m03) Reserving static IP address...
	I0308 03:11:30.883107  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has current primary IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.883708  927850 main.go:141] libmachine: (ha-576225-m03) DBG | unable to find host DHCP lease matching {name: "ha-576225-m03", mac: "52:54:00:e1:8f:ef", ip: "192.168.39.17"} in network mk-ha-576225
	I0308 03:11:30.959126  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Getting to WaitForSSH function...
	I0308 03:11:30.959170  927850 main.go:141] libmachine: (ha-576225-m03) Reserved static IP address: 192.168.39.17
	I0308 03:11:30.959182  927850 main.go:141] libmachine: (ha-576225-m03) Waiting for SSH to be available...
	I0308 03:11:30.962115  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.962668  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:30.962694  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:30.962923  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using SSH client type: external
	I0308 03:11:30.962945  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa (-rw-------)
	I0308 03:11:30.962970  927850 main.go:141] libmachine: (ha-576225-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:11:30.962984  927850 main.go:141] libmachine: (ha-576225-m03) DBG | About to run SSH command:
	I0308 03:11:30.963002  927850 main.go:141] libmachine: (ha-576225-m03) DBG | exit 0
	I0308 03:11:31.089401  927850 main.go:141] libmachine: (ha-576225-m03) DBG | SSH cmd err, output: <nil>: 
	I0308 03:11:31.089707  927850 main.go:141] libmachine: (ha-576225-m03) KVM machine creation complete!
	I0308 03:11:31.090110  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:31.090881  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:31.091116  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:31.091322  927850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:11:31.091340  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:11:31.092835  927850 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:11:31.092851  927850 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:11:31.092859  927850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:11:31.092868  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.095343  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.095733  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.095764  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.095907  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.096070  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.096240  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.096398  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.096647  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.096936  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.096953  927850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:11:31.201096  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:11:31.201125  927850 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:11:31.201133  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.204396  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.204790  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.204829  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.204971  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.205195  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.205402  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.205549  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.205729  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.205900  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.205913  927850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:11:31.311129  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:11:31.311255  927850 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:11:31.311277  927850 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:11:31.311290  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.311591  927850 buildroot.go:166] provisioning hostname "ha-576225-m03"
	I0308 03:11:31.311624  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.311842  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.314524  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.314965  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.314987  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.315176  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.315383  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.315558  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.315724  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.315904  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.316067  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.316079  927850 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225-m03 && echo "ha-576225-m03" | sudo tee /etc/hostname
	I0308 03:11:31.433376  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225-m03
	
	I0308 03:11:31.433407  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.436250  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.436767  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.436799  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.436969  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:31.437218  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.437428  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:31.437604  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:31.437836  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:31.438010  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:31.438033  927850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:11:31.553621  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
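
	Note: the script in the command above either rewrites an existing 127.0.1.1 entry or appends one. Assuming a stock Buildroot guest with no prior 127.0.1.1 line (an assumption, not something this log shows), the resulting entry would look like:

	    $ grep 127.0.1.1 /etc/hosts
	    127.0.1.1 ha-576225-m03
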
	I0308 03:11:31.553655  927850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:11:31.553678  927850 buildroot.go:174] setting up certificates
	I0308 03:11:31.553692  927850 provision.go:84] configureAuth start
	I0308 03:11:31.553706  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetMachineName
	I0308 03:11:31.554061  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:31.556667  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.557080  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.557122  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.557329  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:31.559741  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.560035  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:31.560066  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:31.560184  927850 provision.go:143] copyHostCerts
	I0308 03:11:31.560224  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:11:31.560268  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:11:31.560277  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:11:31.560370  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:11:31.560475  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:11:31.560504  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:11:31.560517  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:11:31.560555  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:11:31.560627  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:11:31.560647  927850 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:11:31.560654  927850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:11:31.560677  927850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:11:31.560729  927850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225-m03 san=[127.0.0.1 192.168.39.17 ha-576225-m03 localhost minikube]
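
	Note: the server certificate generated here is signed by the minikube CA and carries the SANs listed above. minikube does this in Go; a roughly equivalent manual sketch with openssl (the validity period is chosen arbitrarily for illustration) would be:

	    $ openssl req -new -newkey rsa:2048 -nodes \
	        -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-576225-m03"
	    $ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -out server.pem -days 365 \
	        -extfile <(printf "subjectAltName=DNS:ha-576225-m03,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.17")
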
	I0308 03:11:32.027224  927850 provision.go:177] copyRemoteCerts
	I0308 03:11:32.027298  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:11:32.027324  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.030029  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.030410  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.030441  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.030639  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.030859  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.031038  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.031225  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.112944  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:11:32.113014  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:11:32.141177  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:11:32.141264  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 03:11:32.170370  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:11:32.170430  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 03:11:32.197884  927850 provision.go:87] duration metric: took 644.176956ms to configureAuth
	I0308 03:11:32.197915  927850 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:11:32.198159  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:32.198253  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.202754  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.203255  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.203287  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.203477  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.203691  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.203915  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.204124  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.204346  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:32.204564  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:32.204582  927850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:11:32.494880  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:11:32.494907  927850 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:11:32.494916  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetURL
	I0308 03:11:32.496428  927850 main.go:141] libmachine: (ha-576225-m03) DBG | Using libvirt version 6000000
	I0308 03:11:32.499346  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.499789  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.499827  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.500112  927850 main.go:141] libmachine: Docker is up and running!
	I0308 03:11:32.500131  927850 main.go:141] libmachine: Reticulating splines...
	I0308 03:11:32.500140  927850 client.go:171] duration metric: took 25.04507583s to LocalClient.Create
	I0308 03:11:32.500168  927850 start.go:167] duration metric: took 25.045143066s to libmachine.API.Create "ha-576225"
	I0308 03:11:32.500179  927850 start.go:293] postStartSetup for "ha-576225-m03" (driver="kvm2")
	I0308 03:11:32.500189  927850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:11:32.500206  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.500461  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:11:32.500493  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.502835  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.503257  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.503287  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.503472  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.503664  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.503859  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.503980  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.590684  927850 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:11:32.595651  927850 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:11:32.595684  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:11:32.595762  927850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:11:32.595872  927850 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:11:32.595888  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:11:32.595999  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:11:32.607362  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:11:32.638187  927850 start.go:296] duration metric: took 137.992115ms for postStartSetup
	I0308 03:11:32.638244  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetConfigRaw
	I0308 03:11:32.638850  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:32.641586  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.642000  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.642032  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.642284  927850 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:11:32.642552  927850 start.go:128] duration metric: took 25.207373987s to createHost
	I0308 03:11:32.642588  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.644980  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.645363  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.645386  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.645565  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.645768  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.645922  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.646081  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.646298  927850 main.go:141] libmachine: Using SSH client type: native
	I0308 03:11:32.646511  927850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0308 03:11:32.646535  927850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:11:32.750541  927850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709867492.732176517
	
	I0308 03:11:32.750570  927850 fix.go:216] guest clock: 1709867492.732176517
	I0308 03:11:32.750581  927850 fix.go:229] Guest: 2024-03-08 03:11:32.732176517 +0000 UTC Remote: 2024-03-08 03:11:32.642570633 +0000 UTC m=+172.395509561 (delta=89.605884ms)
	I0308 03:11:32.750606  927850 fix.go:200] guest clock delta is within tolerance: 89.605884ms
	I0308 03:11:32.750613  927850 start.go:83] releasing machines lock for "ha-576225-m03", held for 25.315572264s
	I0308 03:11:32.750637  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.750969  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:32.753597  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.753922  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.753947  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.756408  927850 out.go:177] * Found network options:
	I0308 03:11:32.757804  927850 out.go:177]   - NO_PROXY=192.168.39.251,192.168.39.128
	W0308 03:11:32.759109  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 03:11:32.759134  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:11:32.759150  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759630  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759803  927850 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:11:32.759935  927850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:11:32.759988  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	W0308 03:11:32.760084  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 03:11:32.760107  927850 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 03:11:32.760196  927850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:11:32.760221  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:11:32.762779  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763225  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763266  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.763288  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763374  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.763591  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.763647  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:32.763675  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:32.763785  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.763882  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:11:32.763983  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:11:32.764016  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:32.764134  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:11:32.764282  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:11:33.008382  927850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:11:33.017209  927850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:11:33.017313  927850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:11:33.037249  927850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:11:33.037290  927850 start.go:494] detecting cgroup driver to use...
	I0308 03:11:33.037378  927850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:11:33.055104  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:11:33.070739  927850 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:11:33.070810  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:11:33.085894  927850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:11:33.102069  927850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:11:33.231998  927850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:11:33.385442  927850 docker.go:233] disabling docker service ...
	I0308 03:11:33.385507  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:11:33.403675  927850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:11:33.419868  927850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:11:33.570788  927850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:11:33.702817  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:11:33.720244  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:11:33.742357  927850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:11:33.742427  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.754938  927850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:11:33.754988  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.767118  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:11:33.779178  927850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
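
	Note: after the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (a sketch of just those settings, not the whole file):

	    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
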
	I0308 03:11:33.790949  927850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:11:33.804101  927850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:11:33.814949  927850 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:11:33.814998  927850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:11:33.829548  927850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
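
	Note: the sysctl failure above is expected, since br_netfilter is not loaded yet; the subsequent modprobe and the ip_forward write address that. A quick check on the guest (sketch; exact lsmod output varies) would be:

	    $ lsmod | grep br_netfilter
	    $ cat /proc/sys/net/ipv4/ip_forward
	    1
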
	I0308 03:11:33.840326  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:11:33.957615  927850 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:11:34.114582  927850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:11:34.114681  927850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:11:34.120233  927850 start.go:562] Will wait 60s for crictl version
	I0308 03:11:34.120290  927850 ssh_runner.go:195] Run: which crictl
	I0308 03:11:34.124705  927850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:11:34.171114  927850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:11:34.171214  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:11:34.208566  927850 ssh_runner.go:195] Run: crio --version
	I0308 03:11:34.243311  927850 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:11:34.244885  927850 out.go:177]   - env NO_PROXY=192.168.39.251
	I0308 03:11:34.246353  927850 out.go:177]   - env NO_PROXY=192.168.39.251,192.168.39.128
	I0308 03:11:34.247669  927850 main.go:141] libmachine: (ha-576225-m03) Calling .GetIP
	I0308 03:11:34.250669  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:34.251065  927850 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:11:34.251094  927850 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:11:34.251353  927850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:11:34.256292  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
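
	Note: the command above rewrites /etc/hosts so the guest resolves the host-side gateway name; afterwards the file should contain a line like:

	    $ grep host.minikube.internal /etc/hosts
	    192.168.39.1	host.minikube.internal
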
	I0308 03:11:34.270302  927850 mustload.go:65] Loading cluster: ha-576225
	I0308 03:11:34.270571  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:11:34.270842  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:34.270882  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:34.287147  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0308 03:11:34.287662  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:34.288187  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:34.288213  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:34.288624  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:34.288859  927850 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:11:34.290820  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:11:34.291180  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:34.291223  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:34.305635  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0308 03:11:34.306060  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:34.306610  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:34.306645  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:34.306983  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:34.307198  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:11:34.307371  927850 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.17
	I0308 03:11:34.307382  927850 certs.go:194] generating shared ca certs ...
	I0308 03:11:34.307397  927850 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.307518  927850 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:11:34.307556  927850 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:11:34.307565  927850 certs.go:256] generating profile certs ...
	I0308 03:11:34.307657  927850 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:11:34.307686  927850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1
	I0308 03:11:34.307698  927850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.17 192.168.39.254]
	I0308 03:11:34.473425  927850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 ...
	I0308 03:11:34.473460  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1: {Name:mk490d533f12bd08746b8a0548aa53b8f0e67c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.473629  927850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1 ...
	I0308 03:11:34.473647  927850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1: {Name:mk1651ac3b4b39cba47a5428730acc2b58c791b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:11:34.473723  927850 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.9325b7f1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:11:34.473856  927850 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.9325b7f1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
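
	Note: the apiserver certificate is regenerated so its SANs include the new node IP 192.168.39.17 alongside the HA VIP 192.168.39.254. Once copied to the node (see the scp of apiserver.crt below), the SANs can be inspected with a sketch like (requires a reasonably recent openssl for -ext):

	    $ openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt
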
	I0308 03:11:34.474067  927850 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:11:34.474091  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:11:34.474107  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:11:34.474120  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:11:34.474133  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:11:34.474143  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:11:34.474155  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:11:34.474165  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:11:34.474179  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:11:34.474226  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:11:34.474263  927850 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:11:34.474273  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:11:34.474293  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:11:34.474317  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:11:34.474337  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:11:34.474373  927850 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:11:34.474409  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:11:34.474423  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:11:34.474435  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:34.474470  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:11:34.477717  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:34.478085  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:11:34.478117  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:34.478266  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:11:34.478441  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:11:34.478587  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:11:34.478712  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:11:34.557613  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0308 03:11:34.564076  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0308 03:11:34.578715  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0308 03:11:34.583722  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0308 03:11:34.603538  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0308 03:11:34.608841  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0308 03:11:34.626715  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0308 03:11:34.631764  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0308 03:11:34.645769  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0308 03:11:34.652430  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0308 03:11:34.667823  927850 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0308 03:11:34.674509  927850 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0308 03:11:34.691729  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:11:34.721230  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:11:34.747759  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:11:34.774333  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:11:34.801229  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0308 03:11:34.831188  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:11:34.859197  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:11:34.885848  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:11:34.912282  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:11:34.937959  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:11:34.963746  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:11:34.990951  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0308 03:11:35.010210  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0308 03:11:35.028687  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0308 03:11:35.046896  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0308 03:11:35.065386  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0308 03:11:35.083334  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0308 03:11:35.101637  927850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0308 03:11:35.119855  927850 ssh_runner.go:195] Run: openssl version
	I0308 03:11:35.126819  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:11:35.140010  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.145696  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.145752  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:11:35.152185  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:11:35.164700  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:11:35.177680  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.184570  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.184623  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:11:35.192134  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:11:35.205079  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:11:35.218196  927850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.223208  927850 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.223256  927850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:11:35.230210  927850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
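
	Note: each CA bundle is symlinked under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 above) so the system trust store can find it; the hash in the link name comes from a command like:

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
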
	I0308 03:11:35.242505  927850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:11:35.247494  927850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:11:35.247559  927850 kubeadm.go:928] updating node {m03 192.168.39.17 8443 v1.28.4 crio true true} ...
	I0308 03:11:35.247712  927850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
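
	Note: the kubelet flags above land in the 10-kubeadm.conf drop-in that is copied to the node a little further down; once kubelet is running, a sketch of a sanity check (output abbreviated) would be:

	    $ systemctl cat kubelet | grep -- --node-ip
	    ... --node-ip=192.168.39.17
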
	I0308 03:11:35.247755  927850 kube-vip.go:101] generating kube-vip config ...
	I0308 03:11:35.247796  927850 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
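
	Note: the manifest above is the kube-vip static pod that holds the control-plane VIP 192.168.39.254 on eth0 and runs leader election (vip_leaderelection above) in kube-system. After kubelet picks it up (it is copied to /etc/kubernetes/manifests/kube-vip.yaml below), the current leader should have the VIP bound, which can be checked with a sketch like:

	    $ ip -4 addr show dev eth0 | grep 192.168.39.254
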
	I0308 03:11:35.247840  927850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:11:35.260165  927850 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0308 03:11:35.260211  927850 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0308 03:11:35.271487  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0308 03:11:35.271547  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:11:35.271555  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0308 03:11:35.271574  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:11:35.271585  927850 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0308 03:11:35.271627  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 03:11:35.271639  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:11:35.271647  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 03:11:35.276735  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 03:11:35.276760  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0308 03:11:35.323769  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 03:11:35.323776  927850 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:11:35.323823  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0308 03:11:35.323903  927850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 03:11:35.372383  927850 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 03:11:35.372427  927850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
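
	Note: the three Kubernetes binaries are fetched from dl.k8s.io with their published .sha256 files (URLs above) and cached locally before being copied to the node. A manual equivalent for one of them, as a sketch, would be:

	    $ curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
	    $ curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	    $ echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	    kubelet: OK
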
	I0308 03:11:36.323516  927850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0308 03:11:36.334579  927850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 03:11:36.353738  927850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:11:36.373834  927850 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:11:36.392530  927850 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:11:36.397837  927850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:11:36.412941  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:11:36.535957  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:11:36.558242  927850 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:11:36.558597  927850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:11:36.558649  927850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:11:36.574890  927850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0308 03:11:36.575401  927850 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:11:36.575971  927850 main.go:141] libmachine: Using API Version  1
	I0308 03:11:36.576005  927850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:11:36.576382  927850 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:11:36.576597  927850 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:11:36.576771  927850 start.go:316] joinCluster: &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:11:36.576945  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 03:11:36.576969  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:11:36.580127  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:36.580566  927850 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:11:36.580598  927850 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:11:36.580812  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:11:36.580996  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:11:36.581140  927850 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:11:36.581286  927850 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:11:36.759006  927850 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:11:36.759058  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dt9zvj.jo1ekfffapjlcpt7 --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443"
	I0308 03:12:05.219992  927850 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dt9zvj.jo1ekfffapjlcpt7 --discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-576225-m03 --control-plane --apiserver-advertise-address=192.168.39.17 --apiserver-bind-port=8443": (28.460900188s)
	I0308 03:12:05.220036  927850 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 03:12:05.862267  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-576225-m03 minikube.k8s.io/updated_at=2024_03_08T03_12_05_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=ha-576225 minikube.k8s.io/primary=false
	I0308 03:12:05.995524  927850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-576225-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0308 03:12:06.139985  927850 start.go:318] duration metric: took 29.563204661s to joinCluster
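
The two kubectl invocations above (label --overwrite, then removing the control-plane NoSchedule taint) are what let the freshly joined ha-576225-m03 carry the minikube metadata labels and schedule ordinary workloads. A hedged client-go sketch of the same pair of operations (labelAndUntaint is an illustrative name, not a minikube function):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // labelAndUntaint mirrors `kubectl label --overwrite` plus
    // `kubectl taint ... node-role.kubernetes.io/control-plane:NoSchedule-`.
    func labelAndUntaint(ctx context.Context, cs kubernetes.Interface, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if node.Labels == nil {
    		node.Labels = map[string]string{}
    	}
    	node.Labels["minikube.k8s.io/primary"] = "false"

    	// Drop the control-plane NoSchedule taint, keep everything else.
    	var taints []corev1.Taint
    	for _, t := range node.Spec.Taints {
    		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
    			continue
    		}
    		taints = append(taints, t)
    	}
    	node.Spec.Taints = taints

    	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := labelAndUntaint(context.Background(), cs, "ha-576225-m03"); err != nil {
    		panic(err)
    	}
    	fmt.Println("node labeled and untainted")
    }
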
	I0308 03:12:06.140076  927850 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:12:06.141255  927850 out.go:177] * Verifying Kubernetes components...
	I0308 03:12:06.142352  927850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:12:06.140411  927850 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:12:06.473661  927850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:12:06.607643  927850 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:12:06.608018  927850 kapi.go:59] client config for ha-576225: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0308 03:12:06.608128  927850 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.251:8443
	I0308 03:12:06.608464  927850 node_ready.go:35] waiting up to 6m0s for node "ha-576225-m03" to be "Ready" ...
	I0308 03:12:06.608601  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:06.608613  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:06.608623  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:06.608629  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:06.613476  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:07.108987  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:07.109012  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:07.109021  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:07.109024  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:07.113489  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:07.609611  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:07.609654  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:07.609667  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:07.609676  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:07.614855  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:08.109136  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:08.109159  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:08.109169  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:08.109174  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:08.112710  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:08.609205  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:08.609230  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:08.609238  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:08.609243  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:08.613299  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:08.614738  927850 node_ready.go:53] node "ha-576225-m03" has status "Ready":"False"
	I0308 03:12:09.109138  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:09.109171  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:09.109184  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:09.109192  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:09.114081  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:09.609108  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:09.609133  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:09.609142  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:09.609146  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:09.612790  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.109621  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:10.109651  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.109660  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.109664  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.115853  927850 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 03:12:10.609123  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:10.609144  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.609153  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.609164  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.613175  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.614166  927850 node_ready.go:49] node "ha-576225-m03" has status "Ready":"True"
	I0308 03:12:10.614188  927850 node_ready.go:38] duration metric: took 4.005703177s for node "ha-576225-m03" to be "Ready" ...
	I0308 03:12:10.614198  927850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
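
The round-trip GETs that follow implement that wait: each cycle fetches a system pod, inspects its Ready condition, and re-checks the node it runs on. Roughly the same loop written directly against client-go, as a sketch under the assumption of a standard kubeconfig (waitPodReady is illustrative, not minikube's pod_ready.go helper):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod about every 500ms until its PodReady
    // condition is True or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-576225-m03", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }
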
	I0308 03:12:10.614258  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:10.614267  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.614273  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.614280  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.623022  927850 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0308 03:12:10.630027  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.630131  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8qvhp
	I0308 03:12:10.630142  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.630149  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.630154  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.633099  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.633860  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.633878  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.633886  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.633890  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.636909  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.637573  927850 pod_ready.go:92] pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.637592  927850 pod_ready.go:81] duration metric: took 7.542544ms for pod "coredns-5dd5756b68-8qvhp" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.637601  927850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.637661  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pqz96
	I0308 03:12:10.637670  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.637676  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.637683  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.640544  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.641337  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.641351  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.641359  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.641363  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.644006  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.644613  927850 pod_ready.go:92] pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.644629  927850 pod_ready.go:81] duration metric: took 7.0209ms for pod "coredns-5dd5756b68-pqz96" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.644637  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.644688  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225
	I0308 03:12:10.644696  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.644703  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.644705  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.647376  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.647921  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:10.647937  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.647944  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.647948  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.651034  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:10.651665  927850 pod_ready.go:92] pod "etcd-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.651684  927850 pod_ready.go:81] duration metric: took 7.040357ms for pod "etcd-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.651695  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.651758  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m02
	I0308 03:12:10.651767  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.651777  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.651785  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.654568  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.655142  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:10.655161  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.655173  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.655181  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.657901  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:10.658409  927850 pod_ready.go:92] pod "etcd-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:10.658431  927850 pod_ready.go:81] duration metric: took 6.728336ms for pod "etcd-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.658442  927850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:10.809840  927850 request.go:629] Waited for 151.319587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
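
The "client-side throttling" waits logged from here on come from client-go's token-bucket rate limiter, not server-side API Priority and Fairness (as the message itself notes). With QPS and Burst left at zero in the kapi.go client config shown earlier, the client-go defaults of 5 QPS and burst 10 apply, so a loop that pairs a pod GET with a node GET every half second can exhaust the bucket. An illustrative sketch of raising those limits on rest.Config, not something this test does:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Zero values mean the client-go defaults (QPS=5, Burst=10); raising them
    	// trades extra API-server load for fewer client-side throttling waits.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Printf("client ready against %s with QPS=%v Burst=%v\n", cfg.Host, cfg.QPS, cfg.Burst)
    	_ = cs
    }
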
	I0308 03:12:10.809919  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:10.809926  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:10.809935  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:10.809945  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:10.814979  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:11.009925  927850 request.go:629] Waited for 194.218079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.010026  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.010038  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.010046  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.010051  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.013791  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.209544  927850 request.go:629] Waited for 50.248963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.209624  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.209633  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.209645  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.209655  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.213293  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.409351  927850 request.go:629] Waited for 195.315382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.409429  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.409439  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.409451  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.409459  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.414950  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:11.659366  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:11.659391  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.659404  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.659410  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.662970  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:11.809832  927850 request.go:629] Waited for 146.155336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.809915  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:11.809921  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:11.809929  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:11.809937  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:11.814164  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:12.159173  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:12.159204  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.159217  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.159222  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.163032  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.209462  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:12.209495  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.209504  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.209508  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.213094  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.659197  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:12.659224  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.659234  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.659240  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.662989  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.663966  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:12.663982  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:12.663989  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:12.663992  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:12.667169  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:12.667942  927850 pod_ready.go:102] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:13.159056  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:13.159081  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.159089  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.159094  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.162701  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:13.163430  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:13.163445  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.163452  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.163470  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.166687  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:13.659292  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:13.659317  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.659326  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.659331  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.663420  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:13.664337  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:13.664353  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:13.664360  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:13.664364  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:13.667368  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:14.159557  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:14.159587  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.159600  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.159605  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.163923  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.164807  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:14.164830  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.164841  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.164847  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.168939  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.658833  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:14.658890  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.658902  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.658908  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.663084  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:14.664159  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:14.664177  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:14.664184  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:14.664188  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:14.667419  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:14.668371  927850 pod_ready.go:102] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:15.159243  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:15.159266  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.159275  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.159281  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.163078  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:15.163734  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:15.163750  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.163757  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.163760  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.168506  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:15.659119  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:15.659145  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.659156  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.659162  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.663478  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:15.664478  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:15.664492  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:15.664500  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:15.664504  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:15.667813  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.158923  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:16.158951  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.158960  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.158964  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.162787  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.163510  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:16.163532  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.163544  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.163552  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.169472  927850 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 03:12:16.658897  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225-m03
	I0308 03:12:16.658918  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.658926  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.658929  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.662884  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.663730  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:16.663746  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.663754  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.663757  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.667100  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.667648  927850 pod_ready.go:92] pod "etcd-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.667674  927850 pod_ready.go:81] duration metric: took 6.009223937s for pod "etcd-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.667694  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.667755  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225
	I0308 03:12:16.667765  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.667775  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.667782  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.671228  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.671999  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:16.672015  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.672022  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.672027  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.675065  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.675620  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.675642  927850 pod_ready.go:81] duration metric: took 7.93823ms for pod "kube-apiserver-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.675654  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.675723  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m02
	I0308 03:12:16.675732  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.675739  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.675743  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.678782  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:16.679529  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:16.679549  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.679559  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.679564  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.682503  927850 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 03:12:16.683069  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:16.683085  927850 pod_ready.go:81] duration metric: took 7.416749ms for pod "kube-apiserver-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.683093  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:16.809582  927850 request.go:629] Waited for 126.434854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:16.809657  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:16.809665  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:16.809673  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:16.809681  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:16.814238  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:17.009545  927850 request.go:629] Waited for 194.336517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.009624  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.009641  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.009652  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.009662  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.013125  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.210191  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:17.210221  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.210230  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.210234  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.213437  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.409365  927850 request.go:629] Waited for 195.326021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.409428  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.409433  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.409441  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.409445  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.412712  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.684031  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:17.684058  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.684066  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.684070  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.687840  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:17.810060  927850 request.go:629] Waited for 121.330314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.810141  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:17.810151  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:17.810161  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:17.810166  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:17.814919  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:18.183444  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:18.183484  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.183493  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.183496  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.187729  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:18.209863  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:18.209893  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.209904  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.209913  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.213732  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.683850  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:18.683875  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.683883  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.683887  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.687801  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.688889  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:18.688907  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:18.688915  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:18.688920  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:18.692757  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:18.694057  927850 pod_ready.go:102] pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace has status "Ready":"False"
	I0308 03:12:19.183449  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:19.183473  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.183481  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.183487  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.187961  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:19.189192  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:19.189216  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.189229  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.189236  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.192925  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:19.683679  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:19.683709  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.683718  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.683722  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.687602  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:19.688513  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:19.688533  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:19.688542  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:19.688547  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:19.692661  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:20.183275  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:20.183297  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.183306  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.183311  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.188008  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:20.189573  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:20.189597  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.189610  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.189616  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.193431  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.683299  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-576225-m03
	I0308 03:12:20.683323  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.683330  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.683334  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.686816  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.687720  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:20.687740  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.687750  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.687754  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.691161  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.692062  927850 pod_ready.go:92] pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:20.692085  927850 pod_ready.go:81] duration metric: took 4.008983643s for pod "kube-apiserver-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.692099  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.692181  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225
	I0308 03:12:20.692193  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.692203  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.692256  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.696116  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.696802  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:20.696823  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.696834  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.696842  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.700077  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:20.700683  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:20.700707  927850 pod_ready.go:81] duration metric: took 8.599475ms for pod "kube-controller-manager-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.700720  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:20.810081  927850 request.go:629] Waited for 109.23929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:12:20.810175  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m02
	I0308 03:12:20.810183  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:20.810193  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:20.810204  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:20.814972  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.009133  927850 request.go:629] Waited for 193.223791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:21.009211  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:21.009223  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.009231  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.009235  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.013361  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.014100  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.014123  927850 pod_ready.go:81] duration metric: took 313.394468ms for pod "kube-controller-manager-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.014138  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.209143  927850 request.go:629] Waited for 194.924117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m03
	I0308 03:12:21.209228  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-576225-m03
	I0308 03:12:21.209236  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.209246  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.209262  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.213302  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.409339  927850 request.go:629] Waited for 195.303192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.409430  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.409437  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.409449  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.409457  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.414090  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.414729  927850 pod_ready.go:92] pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.414749  927850 pod_ready.go:81] duration metric: took 400.602928ms for pod "kube-controller-manager-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.414761  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gqc9f" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.610182  927850 request.go:629] Waited for 195.322305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gqc9f
	I0308 03:12:21.610249  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gqc9f
	I0308 03:12:21.610255  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.610262  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.610270  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.614335  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:21.809352  927850 request.go:629] Waited for 194.313013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.809447  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:21.809457  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:21.809465  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:21.809469  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:21.813130  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:21.813626  927850 pod_ready.go:92] pod "kube-proxy-gqc9f" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:21.813651  927850 pod_ready.go:81] duration metric: took 398.880333ms for pod "kube-proxy-gqc9f" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:21.813664  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.010228  927850 request.go:629] Waited for 196.450548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:12:22.010311  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcmj2
	I0308 03:12:22.010324  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.010336  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.010343  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.014603  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:22.210014  927850 request.go:629] Waited for 194.37125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:22.210112  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:22.210119  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.210129  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.210160  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.213783  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:22.214460  927850 pod_ready.go:92] pod "kube-proxy-pcmj2" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:22.214487  927850 pod_ready.go:81] duration metric: took 400.8134ms for pod "kube-proxy-pcmj2" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.214503  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.410068  927850 request.go:629] Waited for 195.476035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:12:22.410188  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vjfqv
	I0308 03:12:22.410202  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.410216  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.410222  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.414262  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:22.609139  927850 request.go:629] Waited for 194.283786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:22.609250  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:22.609263  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.609288  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.609295  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.612617  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:22.613243  927850 pod_ready.go:92] pod "kube-proxy-vjfqv" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:22.613287  927850 pod_ready.go:81] duration metric: took 398.759086ms for pod "kube-proxy-vjfqv" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.613302  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:22.809207  927850 request.go:629] Waited for 195.786947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:12:22.809297  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225
	I0308 03:12:22.809306  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:22.809315  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:22.809319  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:22.813232  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.009295  927850 request.go:629] Waited for 195.287024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:23.009365  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225
	I0308 03:12:23.009372  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.009383  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.009391  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.013698  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:23.014272  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.014293  927850 pod_ready.go:81] duration metric: took 400.984379ms for pod "kube-scheduler-ha-576225" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.014302  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.209399  927850 request.go:629] Waited for 195.012698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:12:23.209480  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m02
	I0308 03:12:23.209485  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.209502  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.209511  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.213523  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.409989  927850 request.go:629] Waited for 195.367607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:23.410072  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m02
	I0308 03:12:23.410080  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.410092  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.410113  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.413885  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.414628  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.414668  927850 pod_ready.go:81] duration metric: took 400.35686ms for pod "kube-scheduler-ha-576225-m02" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.414680  927850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.609610  927850 request.go:629] Waited for 194.848328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m03
	I0308 03:12:23.609683  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-576225-m03
	I0308 03:12:23.609688  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.609696  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.609700  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.613726  927850 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 03:12:23.809995  927850 request.go:629] Waited for 195.322339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:23.810090  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes/ha-576225-m03
	I0308 03:12:23.810101  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.810114  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.810123  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.815020  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:23.815865  927850 pod_ready.go:92] pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace has status "Ready":"True"
	I0308 03:12:23.815889  927850 pod_ready.go:81] duration metric: took 401.202158ms for pod "kube-scheduler-ha-576225-m03" in "kube-system" namespace to be "Ready" ...
	I0308 03:12:23.815904  927850 pod_ready.go:38] duration metric: took 13.201695841s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:12:23.815923  927850 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:12:23.815993  927850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:12:23.834635  927850 api_server.go:72] duration metric: took 17.694513051s to wait for apiserver process to appear ...
	I0308 03:12:23.834667  927850 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:12:23.834686  927850 api_server.go:253] Checking apiserver healthz at https://192.168.39.251:8443/healthz ...
	I0308 03:12:23.846970  927850 api_server.go:279] https://192.168.39.251:8443/healthz returned 200:
	ok
	I0308 03:12:23.847059  927850 round_trippers.go:463] GET https://192.168.39.251:8443/version
	I0308 03:12:23.847070  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:23.847097  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:23.847109  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:23.848426  927850 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 03:12:23.848488  927850 api_server.go:141] control plane version: v1.28.4
	I0308 03:12:23.848502  927850 api_server.go:131] duration metric: took 13.827518ms to wait for apiserver health ...
	I0308 03:12:23.848514  927850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:12:24.009793  927850 request.go:629] Waited for 161.190738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.009892  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.009904  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.009919  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.009927  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.017361  927850 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0308 03:12:24.023982  927850 system_pods.go:59] 24 kube-system pods found
	I0308 03:12:24.024011  927850 system_pods.go:61] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:12:24.024015  927850 system_pods.go:61] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:12:24.024019  927850 system_pods.go:61] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:12:24.024023  927850 system_pods.go:61] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:12:24.024027  927850 system_pods.go:61] "etcd-ha-576225-m03" [0116b1fc-b67f-4b77-b0df-2e467f872a40] Running
	I0308 03:12:24.024029  927850 system_pods.go:61] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:12:24.024033  927850 system_pods.go:61] "kindnet-j425g" [12209f2c-d279-4280-bb13-fe49af81cfea] Running
	I0308 03:12:24.024037  927850 system_pods.go:61] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:12:24.024042  927850 system_pods.go:61] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:12:24.024048  927850 system_pods.go:61] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:12:24.024055  927850 system_pods.go:61] "kube-apiserver-ha-576225-m03" [75efc1d4-9ebb-4e79-bb4f-1cbc58b7114f] Running
	I0308 03:12:24.024061  927850 system_pods.go:61] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:12:24.024073  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:12:24.024078  927850 system_pods.go:61] "kube-controller-manager-ha-576225-m03" [d86f869b-b8bc-4f8b-b039-d73f36b2c29c] Running
	I0308 03:12:24.024084  927850 system_pods.go:61] "kube-proxy-gqc9f" [ef6598e1-d792-44b3-b0a7-4ce4b80b67d8] Running
	I0308 03:12:24.024091  927850 system_pods.go:61] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:12:24.024095  927850 system_pods.go:61] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:12:24.024101  927850 system_pods.go:61] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:12:24.024104  927850 system_pods.go:61] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:12:24.024110  927850 system_pods.go:61] "kube-scheduler-ha-576225-m03" [d0dc5765-5042-4946-888a-19a4e65ecf2e] Running
	I0308 03:12:24.024113  927850 system_pods.go:61] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:12:24.024117  927850 system_pods.go:61] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:12:24.024120  927850 system_pods.go:61] "kube-vip-ha-576225-m03" [59018698-49da-41e2-b4a5-9825edc8ae87] Running
	I0308 03:12:24.024125  927850 system_pods.go:61] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:12:24.024132  927850 system_pods.go:74] duration metric: took 175.610989ms to wait for pod list to return data ...
	I0308 03:12:24.024143  927850 default_sa.go:34] waiting for default service account to be created ...
	I0308 03:12:24.209584  927850 request.go:629] Waited for 185.351941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:12:24.209648  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/default/serviceaccounts
	I0308 03:12:24.209653  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.209662  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.209675  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.213799  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:24.213952  927850 default_sa.go:45] found service account: "default"
	I0308 03:12:24.213972  927850 default_sa.go:55] duration metric: took 189.816018ms for default service account to be created ...
	I0308 03:12:24.213983  927850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 03:12:24.409209  927850 request.go:629] Waited for 195.138277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.409289  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/namespaces/kube-system/pods
	I0308 03:12:24.409297  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.409308  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.409323  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.416504  927850 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0308 03:12:24.425442  927850 system_pods.go:86] 24 kube-system pods found
	I0308 03:12:24.425469  927850 system_pods.go:89] "coredns-5dd5756b68-8qvhp" [7686e8de-1f0a-4952-822a-22e888b17da3] Running
	I0308 03:12:24.425475  927850 system_pods.go:89] "coredns-5dd5756b68-pqz96" [e2bf0fdf-7908-4600-8e88-7496688efb0d] Running
	I0308 03:12:24.425479  927850 system_pods.go:89] "etcd-ha-576225" [552c1e9d-8d4d-4353-9f4b-a16d2842a6db] Running
	I0308 03:12:24.425483  927850 system_pods.go:89] "etcd-ha-576225-m02" [c98d6538-de7b-4bc2-add6-1ecca4c1d2de] Running
	I0308 03:12:24.425487  927850 system_pods.go:89] "etcd-ha-576225-m03" [0116b1fc-b67f-4b77-b0df-2e467f872a40] Running
	I0308 03:12:24.425492  927850 system_pods.go:89] "kindnet-dxqvf" [68b9ef4f-0693-425c-b9e5-3232abe019b1] Running
	I0308 03:12:24.425496  927850 system_pods.go:89] "kindnet-j425g" [12209f2c-d279-4280-bb13-fe49af81cfea] Running
	I0308 03:12:24.425504  927850 system_pods.go:89] "kindnet-w8zww" [45310215-8829-47dc-9632-3a16d41d20ed] Running
	I0308 03:12:24.425512  927850 system_pods.go:89] "kube-apiserver-ha-576225" [1114e8bb-763b-4e4f-81f2-347808472cf4] Running
	I0308 03:12:24.425516  927850 system_pods.go:89] "kube-apiserver-ha-576225-m02" [17bf299a-ef4d-4105-932b-1ed8e313a01f] Running
	I0308 03:12:24.425523  927850 system_pods.go:89] "kube-apiserver-ha-576225-m03" [75efc1d4-9ebb-4e79-bb4f-1cbc58b7114f] Running
	I0308 03:12:24.425528  927850 system_pods.go:89] "kube-controller-manager-ha-576225" [c0a2335c-4478-454b-9d5b-4eec3e40cbe8] Running
	I0308 03:12:24.425535  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m02" [b82fe36c-233d-483c-99ac-c272a9f88b28] Running
	I0308 03:12:24.425539  927850 system_pods.go:89] "kube-controller-manager-ha-576225-m03" [d86f869b-b8bc-4f8b-b039-d73f36b2c29c] Running
	I0308 03:12:24.425546  927850 system_pods.go:89] "kube-proxy-gqc9f" [ef6598e1-d792-44b3-b0a7-4ce4b80b67d8] Running
	I0308 03:12:24.425552  927850 system_pods.go:89] "kube-proxy-pcmj2" [43be60bc-c064-4f45-9653-15b886260114] Running
	I0308 03:12:24.425558  927850 system_pods.go:89] "kube-proxy-vjfqv" [d0b85f25-a586-45fc-b0a5-957508dc720f] Running
	I0308 03:12:24.425562  927850 system_pods.go:89] "kube-scheduler-ha-576225" [4e1905fd-3e20-4b63-9bdc-2635cc6223f5] Running
	I0308 03:12:24.425568  927850 system_pods.go:89] "kube-scheduler-ha-576225-m02" [54cc83d1-3413-42a3-9498-86dd70075c56] Running
	I0308 03:12:24.425572  927850 system_pods.go:89] "kube-scheduler-ha-576225-m03" [d0dc5765-5042-4946-888a-19a4e65ecf2e] Running
	I0308 03:12:24.425578  927850 system_pods.go:89] "kube-vip-ha-576225" [ef520407-8443-46ea-a158-0eb26300450f] Running
	I0308 03:12:24.425582  927850 system_pods.go:89] "kube-vip-ha-576225-m02" [4d2d842e-c988-40bf-aa6c-b534aa87cdb3] Running
	I0308 03:12:24.425588  927850 system_pods.go:89] "kube-vip-ha-576225-m03" [59018698-49da-41e2-b4a5-9825edc8ae87] Running
	I0308 03:12:24.425592  927850 system_pods.go:89] "storage-provisioner" [73ce39c2-3ef3-4c2a-996c-47a02fd12f4e] Running
	I0308 03:12:24.425601  927850 system_pods.go:126] duration metric: took 211.612108ms to wait for k8s-apps to be running ...
	I0308 03:12:24.425609  927850 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 03:12:24.425655  927850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:12:24.444036  927850 system_svc.go:56] duration metric: took 18.418896ms WaitForService to wait for kubelet
	I0308 03:12:24.444065  927850 kubeadm.go:576] duration metric: took 18.303949873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:12:24.444104  927850 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:12:24.609516  927850 request.go:629] Waited for 165.336121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.251:8443/api/v1/nodes
	I0308 03:12:24.609597  927850 round_trippers.go:463] GET https://192.168.39.251:8443/api/v1/nodes
	I0308 03:12:24.609602  927850 round_trippers.go:469] Request Headers:
	I0308 03:12:24.609610  927850 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0308 03:12:24.609616  927850 round_trippers.go:473]     Accept: application/json, */*
	I0308 03:12:24.614024  927850 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 03:12:24.615227  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615252  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615263  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615267  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615271  927850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:12:24.615274  927850 node_conditions.go:123] node cpu capacity is 2
	I0308 03:12:24.615278  927850 node_conditions.go:105] duration metric: took 171.169138ms to run NodePressure ...
	I0308 03:12:24.615290  927850 start.go:240] waiting for startup goroutines ...
	I0308 03:12:24.615311  927850 start.go:254] writing updated cluster config ...
	I0308 03:12:24.615596  927850 ssh_runner.go:195] Run: rm -f paused
	I0308 03:12:24.671690  927850 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 03:12:24.673822  927850 out.go:177] * Done! kubectl is now configured to use "ha-576225" cluster and "default" namespace by default
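	(Editor's note, not part of the captured test output.) The pod_ready.go and round_trippers.go lines above show the readiness-polling pattern minikube uses: repeatedly GET the pod, check its Ready condition, GET the owning node, and back off when client-go's client-side throttling kicks in ("Waited for ... due to client-side throttling"). A minimal sketch of that pattern with client-go is below; the kubeconfig path, poll interval, and use of the kube-proxy-pcmj2 pod from the log are assumptions for illustration, not minikube's actual implementation.

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); path is an assumption.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go applies client-side rate limiting by default, which is what
		// produces the "Waited for ... due to client-side throttling" log lines.
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll every 200ms for up to 6 minutes, mirroring the "waiting up to 6m0s"
		// messages in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-pcmj2", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	```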
	
	
	==> CRI-O <==
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.807219864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8c34acd-40e5-4539-aa2e-6df940c2a79f name=/runtime.v1.RuntimeService/Version
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.810945366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22efe9e4-e8a0-4ae6-b5d6-f4c8af683913 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.811492865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867810811467704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22efe9e4-e8a0-4ae6-b5d6-f4c8af683913 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.812290578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4631dd0-a724-4924-9393-7725559c5c57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.812421223Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4631dd0-a724-4924-9393-7725559c5c57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.812706610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4631dd0-a724-4924-9393-7725559c5c57 name=/runtime.v1.RuntimeService/ListContainers
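	(Editor's note, not part of the captured test output.) The CRI-O debug entries above are request/response pairs for the CRI RPCs Version, ImageFsInfo, and ListContainers on the runtime socket; the ListContainers response with an empty filter returns every container on the node, which is why the full list is dumped each time. A minimal sketch of issuing the same ListContainers RPC against CRI-O's socket is below; the socket path /var/run/crio/crio.sock and the output formatting are assumptions for illustration.

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI socket; CRI-O's default path is assumed here.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC as the "/runtime.v1.RuntimeService/ListContainers" entries above:
		// an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
	```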
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.863187866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03828b3f-d328-4d0f-b057-a46494a551c7 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.863286819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03828b3f-d328-4d0f-b057-a46494a551c7 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.865102940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76cf46b5-437d-4ef8-9eb2-49e90f0bdfcb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.866032279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867810866002831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76cf46b5-437d-4ef8-9eb2-49e90f0bdfcb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.867398004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fd36e0e-207c-4377-9fca-617b048245f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.867485607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fd36e0e-207c-4377-9fca-617b048245f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.867778756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fd36e0e-207c-4377-9fca-617b048245f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.918667061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4b2a1fe-3482-4a5d-a5b9-0c81d0441fa5 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.918773441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4b2a1fe-3482-4a5d-a5b9-0c81d0441fa5 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.919874767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a11d22d1-ed72-4671-8046-d776879509f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.920485905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709867810920463849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a11d22d1-ed72-4671-8046-d776879509f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.920954064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93e42c00-83d7-4d4e-a0af-0a0a4847d9da name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.921028480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93e42c00-83d7-4d4e-a0af-0a0a4847d9da name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.921445056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00534de89b2ec5afed232d2db5505105565342ad6817df021c7ff6d3390f2774,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709867383321556743,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058991457,Labels
:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31099fe894975d3193afde5679ec1bc1cede556b07d27ade562e58f6ea919881,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867361355791690,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93e42c00-83d7-4d4e-a0af-0a0a4847d9da name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.954664011Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d699270-13ca-463f-8532-32c157080799 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.955380635Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-9594n,Uid:d8bc0fba-1a5c-4082-a505-a0653c59180a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867546071948510,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:12:25.749871868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-8qvhp,Uid:7686e8de-1f0a-4952-822a-22e888b17da3,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1709867383030500069,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:42.688652735Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867383030054852,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{ku
bectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T03:09:42.697505082Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-pqz96,Uid:e2bf0fdf-7908-4600-8e88-7496688efb0d,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1709867383010490630,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:42.695910860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&PodSandboxMetadata{Name:kindnet-dxqvf,Uid:68b9ef4f-0693-425c-b9e5-3232abe019b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867378771511784,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annota
tions:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:38.437897749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&PodSandboxMetadata{Name:kube-proxy-pcmj2,Uid:43be60bc-c064-4f45-9653-15b886260114,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867378760052541,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:09:38.419906077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&PodSandboxMetadata{Name:etcd-ha-576225,Uid:26cdb4c7afaf223219da4d02f01a1ea4,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1709867358960654669,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.251:2379,kubernetes.io/config.hash: 26cdb4c7afaf223219da4d02f01a1ea4,kubernetes.io/config.seen: 2024-03-08T03:09:18.435084423Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-576225,Uid:fb9fc89b7fdb50461eab2dcf2451250e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358952981636,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b
7fdb50461eab2dcf2451250e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.251:8443,kubernetes.io/config.hash: fb9fc89b7fdb50461eab2dcf2451250e,kubernetes.io/config.seen: 2024-03-08T03:09:18.435085785Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-576225,Uid:b43f1b4602f1b00b137428ffec94b74a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358944269694,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b43f1b4602f1b00b137428ffec94b74a,kubernetes.io/config.seen: 2024-03-08T03:09:18.435086681Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-576225,Uid:af200b4f08e9aba6d5619bb32fa9f733,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358933251081,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: af200b4f08e9aba6d5619bb32fa9f733,kubernetes.io/config.seen: 2024-03-08T03:09:18.435079820Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-576225,Uid:79332678c9cff5037e42e087635740e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709867358928007895,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{kubernetes.io/config.hash: 79332678c9cff5037e42e087635740e0,kubernetes.io/config.seen: 2024-03-08T03:09:18.435083364Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2d699270-13ca-463f-8532-32c157080799 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.956101940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f91642e-fe15-44d5-be90-0e4390822429 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.956154457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f91642e-fe15-44d5-be90-0e4390822429 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:16:50 ha-576225 crio[675]: time="2024-03-08 03:16:50.956621209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709867547347024603,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f,PodSandboxId:2f7897e64ae109f5074c819b99cb326b7fe2dabe5cbd88ecc4dc6eec6332659a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709867448399916021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709867448392195482,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383283464505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709867383257711758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations
:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e,PodSandboxId:88d456c41e9f64ca27d8b576aa764c296910e14081e0f3910e69f75431245732,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709867381058
991457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709867379130502988,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709867359282233422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2,PodSandboxId:9d1b14daf08eec7cf8312f12dcfb5d1c86429dba81d3414878015ca52dcbda0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709867359246657429,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.na
me: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c,PodSandboxId:2e14d9826288fc7481dc4642d5da3a18efa95b2ea9e06cd3cd1532e07ded5325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709867359157763510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709867359110652467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f91642e-fe15-44d5-be90-0e4390822429 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5282718f03eb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   0524f01439e2f       busybox-5b5d89c9d6-9594n
	6dcd572cdc4ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   2f7897e64ae10       storage-provisioner
	c751323fea4d9       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   a6b1803470779       kube-vip-ha-576225
	00534de89b2ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   2f7897e64ae10       storage-provisioner
	c29d3c09ae3c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   632fde5a7793c       coredns-5dd5756b68-8qvhp
	e6551e5e70b01       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   5d9f21a723332       coredns-5dd5756b68-pqz96
	6775e52109dca       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   88d456c41e9f6       kindnet-dxqvf
	da2c9bb706201       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   9f60642cbf5af       kube-proxy-pcmj2
	31099fe894975       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   a6b1803470779       kube-vip-ha-576225
	79db3710d20d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   5b9d25fbfde63       etcd-ha-576225
	556a4677df889       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   9d1b14daf08ee       kube-controller-manager-ha-576225
	fe007de6550da       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   2e14d9826288f       kube-apiserver-ha-576225
	77dc7f2494354       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   7a8444878ab4c       kube-scheduler-ha-576225
	
	
	==> coredns [c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788] <==
	[INFO] 10.244.0.4:57715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185202s
	[INFO] 10.244.0.4:58493 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000187997s
	[INFO] 10.244.0.4:51494 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142605s
	[INFO] 10.244.0.4:36385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003322395s
	[INFO] 10.244.0.4:39290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119187s
	[INFO] 10.244.0.4:54781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156597s
	[INFO] 10.244.2.2:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156855s
	[INFO] 10.244.2.2:51544 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122332s
	[INFO] 10.244.2.2:36974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001216836s
	[INFO] 10.244.2.2:46648 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079695s
	[INFO] 10.244.2.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116087s
	[INFO] 10.244.1.2:55081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181347s
	[INFO] 10.244.1.2:33288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001414035s
	[INFO] 10.244.1.2:34740 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200343s
	[INFO] 10.244.1.2:34593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089308s
	[INFO] 10.244.0.4:57556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168693s
	[INFO] 10.244.0.4:55624 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070785s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203686s
	[INFO] 10.244.2.2:38702 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143629s
	[INFO] 10.244.2.2:39439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:41980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276421s
	[INFO] 10.244.0.4:55612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118127s
	[INFO] 10.244.0.4:54270 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081257s
	[INFO] 10.244.2.2:49847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192089s
	[INFO] 10.244.2.2:45358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198525s
	
	
	==> coredns [e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df] <==
	[INFO] 10.244.1.2:40496 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000539144s
	[INFO] 10.244.1.2:44875 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001406973s
	[INFO] 10.244.0.4:34507 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002484084s
	[INFO] 10.244.0.4:41817 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000191005s
	[INFO] 10.244.2.2:46018 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001768234s
	[INFO] 10.244.2.2:44074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245211s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020143s
	[INFO] 10.244.1.2:36967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124177s
	[INFO] 10.244.1.2:49099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135326s
	[INFO] 10.244.1.2:38253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253563s
	[INFO] 10.244.1.2:39140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097524s
	[INFO] 10.244.0.4:50886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066375s
	[INFO] 10.244.0.4:36001 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044745s
	[INFO] 10.244.2.2:52701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189269s
	[INFO] 10.244.1.2:56384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178001s
	[INFO] 10.244.1.2:57745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181456s
	[INFO] 10.244.1.2:36336 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125903s
	[INFO] 10.244.0.4:51847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152568s
	[INFO] 10.244.0.4:40398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222601s
	[INFO] 10.244.2.2:39215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179733s
	[INFO] 10.244.2.2:44810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018976s
	[INFO] 10.244.1.2:53930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169054s
	[INFO] 10.244.1.2:39490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132254s
	[INFO] 10.244.1.2:45653 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129104s
	[INFO] 10.244.1.2:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154053s
	
	
	==> describe nodes <==
	Name:               ha-576225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:12:36 +0000   Fri, 08 Mar 2024 03:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-576225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1732a5e385cf44ce86b216e3f63b18e9
	  System UUID:                1732a5e3-85cf-44ce-86b2-16e3f63b18e9
	  Boot ID:                    22459aef-7ea9-46db-b507-1fb97d6edacd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9594n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 coredns-5dd5756b68-8qvhp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m13s
	  kube-system                 coredns-5dd5756b68-pqz96             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m13s
	  kube-system                 etcd-ha-576225                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-dxqvf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m13s
	  kube-system                 kube-apiserver-ha-576225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-ha-576225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-pcmj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-scheduler-ha-576225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-vip-ha-576225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m11s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m33s (x7 over 7m33s)  kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m33s (x8 over 7m33s)  kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x8 over 7m33s)  kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m22s                  kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s                  kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s                  kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m14s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal  NodeReady                7m9s                   kubelet          Node ha-576225 status is now: NodeReady
	  Normal  RegisteredNode           5m45s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal  RegisteredNode           4m31s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	
	
	Name:               ha-576225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:10:36 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:13:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 08 Mar 2024 03:12:35 +0000   Fri, 08 Mar 2024 03:14:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-576225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 852d29792aec4a87b8b6c74704738411
	  System UUID:                852d2979-2aec-4a87-b8b6-c74704738411
	  Boot ID:                    7dd1b7b9-6e88-4666-a7ad-564e8cd548ef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wlj7r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 etcd-ha-576225-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-w8zww                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-576225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-576225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-vjfqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-576225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-576225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m56s  kube-proxy       
	  Normal  RegisteredNode  6m14s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode  5m45s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode  4m31s  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  NodeNotReady    2m44s  node-controller  Node ha-576225-m02 status is now: NodeNotReady
	
	
	Name:               ha-576225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_12_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:12:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:16:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:12:32 +0000   Fri, 08 Mar 2024 03:12:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-576225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e53bc87ed31a4387be9c7b928f4e70cd
	  System UUID:                e53bc87e-d31a-4387-be9c-7b928f4e70cd
	  Boot ID:                    48eba781-e477-4452-8326-e60054c38dbb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc27d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 etcd-ha-576225-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kindnet-j425g                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-ha-576225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-ha-576225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-gqc9f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-ha-576225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-vip-ha-576225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m47s  kube-proxy       
	  Normal  RegisteredNode  4m49s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal  RegisteredNode  4m45s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal  RegisteredNode  4m31s  node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	
	
	Name:               ha-576225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_13_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:13:32 +0000   Fri, 08 Mar 2024 03:13:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-576225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 524efacfa67040b0afe359afd19efdd6
	  System UUID:                524efacf-a670-40b0-afe3-59afd19efdd6
	  Boot ID:                    d890d781-2a80-445d-89e7-43c2432b0da3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5qbg6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m50s
	  kube-system                 kube-proxy-mk2g8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x5 over 3m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x5 over 3m51s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal  NodeReady                3m42s                  kubelet          Node ha-576225-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar 8 03:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051989] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042634] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.518416] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.422136] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.681949] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 8 03:09] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.056257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063726] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.163955] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153131] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264990] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.215071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060445] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.086248] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.235554] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.086526] kauditd_printk_skb: 40 callbacks suppressed
	[  +2.541733] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[ +10.298670] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.185227] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025] <==
	{"level":"warn","ts":"2024-03-08T03:16:51.22445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.232643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.233441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.236482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.23775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.250659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.258278Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.265227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.268907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.272192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.279654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.284184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.291094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.302474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.306478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.311036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.318301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.324487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.330004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.334226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.337661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.343285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.349095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.356417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-08T03:16:51.37663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ebeb2ab026a2136","from":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:16:51 up 8 min,  0 users,  load average: 0.12, 0.43, 0.28
	Linux ha-576225 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6775e52109dca4a8a51dc7cd939a379b382f5b1d7fa0e9ab441e1fec558db65e] <==
	I0308 03:16:11.961453       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:16:21.968579       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:16:21.968684       1 main.go:227] handling current node
	I0308 03:16:21.968711       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:16:21.968731       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:16:21.968864       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:16:21.968885       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:16:21.968950       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:16:21.968968       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:16:31.983975       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:16:31.984098       1 main.go:227] handling current node
	I0308 03:16:31.984126       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:16:31.984145       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:16:31.984284       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:16:31.984315       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:16:31.984553       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:16:31.984576       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:16:42.010718       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:16:42.010796       1 main.go:227] handling current node
	I0308 03:16:42.010818       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:16:42.010836       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:16:42.010957       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:16:42.010977       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:16:42.011038       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:16:42.011056       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c] <==
	Trace[975446308]:  ---"Txn call completed" 3879ms (03:10:51.511)]
	Trace[975446308]: ---"About to apply patch" 3880ms (03:10:51.511)
	Trace[975446308]: [3.88270775s] [3.88270775s] END
	I0308 03:10:51.513812       1 trace.go:236] Trace[1006107015]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a0d397e-8eaf-48c9-9e1b-eb336f6c6341,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-576225,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (08-Mar-2024 03:10:47.181) (total time: 4332ms):
	Trace[1006107015]: ["GuaranteedUpdate etcd3" audit-id:4a0d397e-8eaf-48c9-9e1b-eb336f6c6341,key:/leases/kube-node-lease/ha-576225,type:*coordination.Lease,resource:leases.coordination.k8s.io 4332ms (03:10:47.181)
	Trace[1006107015]:  ---"Txn call completed" 4331ms (03:10:51.513)]
	Trace[1006107015]: [4.332477664s] [4.332477664s] END
	I0308 03:10:51.515522       1 trace.go:236] Trace[726453465]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b72982c3-a6e8-4744-925c-1e32e2f6783b,client:192.168.39.128,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:45.247) (total time: 6267ms):
	Trace[726453465]: ["Create etcd3" audit-id:b72982c3-a6e8-4744-925c-1e32e2f6783b,key:/events/kube-system/kube-vip-ha-576225-m02.17baab603a97f594,type:*core.Event,resource:events 6267ms (03:10:45.248)
	Trace[726453465]:  ---"Txn call succeeded" 6266ms (03:10:51.515)]
	Trace[726453465]: [6.267573919s] [6.267573919s] END
	I0308 03:10:51.555174       1 trace.go:236] Trace[1361706867]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0f4c7967-9609-4262-af3b-7069631c5b78,client:192.168.39.128,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-576225-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (08-Mar-2024 03:10:47.116) (total time: 4438ms):
	Trace[1361706867]: ["GuaranteedUpdate etcd3" audit-id:0f4c7967-9609-4262-af3b-7069631c5b78,key:/minions/ha-576225-m02,type:*core.Node,resource:nodes 4438ms (03:10:47.116)
	Trace[1361706867]:  ---"Txn call completed" 4393ms (03:10:51.511)
	Trace[1361706867]:  ---"Txn call completed" 41ms (03:10:51.554)]
	Trace[1361706867]: ---"About to apply patch" 4393ms (03:10:51.511)
	Trace[1361706867]: ---"Object stored in database" 41ms (03:10:51.554)
	Trace[1361706867]: [4.43839163s] [4.43839163s] END
	I0308 03:10:51.572082       1 trace.go:236] Trace[520816267]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a32672da-c798-4df5-a30a-db78d2ee4bc1,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:45.885) (total time: 5686ms):
	Trace[520816267]: [5.686200268s] [5.686200268s] END
	I0308 03:10:51.576675       1 trace.go:236] Trace[1188707554]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:53521e53-bcfd-42b6-b12c-4ccc13f6573d,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:44.397) (total time: 7178ms):
	Trace[1188707554]: [7.178975243s] [7.178975243s] END
	I0308 03:10:51.580468       1 trace.go:236] Trace[1763030422]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:64626611-9a16-40fe-a10f-c16277898ecc,client:192.168.39.128,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (08-Mar-2024 03:10:46.399) (total time: 5181ms):
	Trace[1763030422]: [5.181389259s] [5.181389259s] END
	W0308 03:13:34.357866       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.251]
	
	
	==> kube-controller-manager [556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2] <==
	E0308 03:13:00.040221       1 certificate_controller.go:146] Sync csr-f5xth failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-f5xth": the object has been modified; please apply your changes to the latest version and try again
	E0308 03:13:00.058600       1 certificate_controller.go:146] Sync csr-f5xth failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-f5xth": the object has been modified; please apply your changes to the latest version and try again
	I0308 03:13:01.551010       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-576225-m04\" does not exist"
	I0308 03:13:01.590116       1 range_allocator.go:380] "Set node PodCIDR" node="ha-576225-m04" podCIDRs=["10.244.3.0/24"]
	I0308 03:13:01.623631       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tt2g5"
	I0308 03:13:01.630279       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5qbg6"
	I0308 03:13:01.727660       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-k68g4"
	I0308 03:13:01.754785       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-tt2g5"
	I0308 03:13:01.818548       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qbtrf"
	I0308 03:13:01.867541       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-sv66p"
	I0308 03:13:02.540010       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225-m04"
	I0308 03:13:02.540304       1 event.go:307] "Event occurred" object="ha-576225-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller"
	I0308 03:13:09.084526       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-576225-m04"
	I0308 03:14:07.573676       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-576225-m04"
	I0308 03:14:07.575859       1 event.go:307] "Event occurred" object="ha-576225-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-576225-m02 status is now: NodeNotReady"
	I0308 03:14:07.600972       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.620729       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.637292       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.651460       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-wlj7r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.672970       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vjfqv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.681554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.654915ms"
	I0308 03:14:07.682418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="114.514µs"
	I0308 03:14:07.720625       1 event.go:307] "Event occurred" object="kube-system/kindnet-w8zww" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.745980       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:14:07.776956       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-576225-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176] <==
	I0308 03:09:39.528881       1 server_others.go:69] "Using iptables proxy"
	I0308 03:09:39.543990       1 node.go:141] Successfully retrieved node IP: 192.168.39.251
	I0308 03:09:39.609748       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:09:39.609788       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:09:39.612456       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:09:39.612921       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:09:39.613100       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:09:39.613144       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:09:39.614717       1 config.go:188] "Starting service config controller"
	I0308 03:09:39.615182       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:09:39.615246       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:09:39.615253       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:09:39.616111       1 config.go:315] "Starting node config controller"
	I0308 03:09:39.616145       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:09:39.716286       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:09:39.719403       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:09:39.719425       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446] <==
	W0308 03:09:22.701875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:09:22.702015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:09:23.513890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 03:09:23.513999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 03:09:23.530275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:09:23.530459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:09:23.592639       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:09:23.592722       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:09:23.593942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:09:23.593994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:09:23.794105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 03:09:23.794127       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 03:09:23.930026       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:09:23.930102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0308 03:09:25.382141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0308 03:12:25.760214       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-cc27d\": pod busybox-5b5d89c9d6-cc27d is already assigned to node \"ha-576225-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-cc27d" node="ha-576225-m03"
	E0308 03:12:25.760792       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 568c3895-25ab-4967-bebd-d0bbb9203ec4(default/busybox-5b5d89c9d6-cc27d) wasn't assumed so cannot be forgotten"
	E0308 03:12:25.760883       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-cc27d\": pod busybox-5b5d89c9d6-cc27d is already assigned to node \"ha-576225-m03\"" pod="default/busybox-5b5d89c9d6-cc27d"
	I0308 03:12:25.760951       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-cc27d" node="ha-576225-m03"
	E0308 03:13:01.674843       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5qbg6\": pod kindnet-5qbg6 is already assigned to node \"ha-576225-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5qbg6" node="ha-576225-m04"
	E0308 03:13:01.674979       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 8f4975bf-f49e-4f05-b5f7-f8e9fc419bbe(kube-system/kindnet-5qbg6) wasn't assumed so cannot be forgotten"
	E0308 03:13:01.675041       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5qbg6\": pod kindnet-5qbg6 is already assigned to node \"ha-576225-m04\"" pod="kube-system/kindnet-5qbg6"
	I0308 03:13:01.675101       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5qbg6" node="ha-576225-m04"
	E0308 03:13:01.675915       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tt2g5\": pod kube-proxy-tt2g5 is already assigned to node \"ha-576225-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tt2g5" node="ha-576225-m04"
	E0308 03:13:01.676051       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tt2g5\": pod kube-proxy-tt2g5 is already assigned to node \"ha-576225-m04\"" pod="kube-system/kube-proxy-tt2g5"
	
	
	==> kubelet <==
	Mar 08 03:12:29 ha-576225 kubelet[1359]: E0308 03:12:29.006281    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:12:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:12:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:13:29 ha-576225 kubelet[1359]: E0308 03:13:29.008936    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:13:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:13:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:14:29 ha-576225 kubelet[1359]: E0308 03:14:29.005243    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:14:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:14:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:15:29 ha-576225 kubelet[1359]: E0308 03:15:29.004474    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:15:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:15:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:16:29 ha-576225 kubelet[1359]: E0308 03:16:29.001966    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:16:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:16:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:16:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:16:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-576225 -n ha-576225
helpers_test.go:261: (dbg) Run:  kubectl --context ha-576225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (56.37s)

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartClusterKeepsNodes (375.58s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-576225 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-576225 -v=7 --alsologtostderr
E0308 03:17:52.008659  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:18:19.692405  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:18:32.256712  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-576225 -v=7 --alsologtostderr: exit status 82 (2m2.72035028s)

                                                
                                                
-- stdout --
	* Stopping node "ha-576225-m04"  ...
	* Stopping node "ha-576225-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:16:52.968337  933211 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:16:52.969624  933211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:52.969642  933211 out.go:304] Setting ErrFile to fd 2...
	I0308 03:16:52.969647  933211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:16:52.970092  933211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:16:52.970379  933211 out.go:298] Setting JSON to false
	I0308 03:16:52.970459  933211 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:52.970813  933211 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:52.970901  933211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:16:52.971066  933211 mustload.go:65] Loading cluster: ha-576225
	I0308 03:16:52.971239  933211 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:16:52.971291  933211 stop.go:39] StopHost: ha-576225-m04
	I0308 03:16:52.971642  933211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:52.971686  933211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:52.986905  933211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45509
	I0308 03:16:52.987353  933211 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:52.987953  933211 main.go:141] libmachine: Using API Version  1
	I0308 03:16:52.987978  933211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:52.988377  933211 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:52.990741  933211 out.go:177] * Stopping node "ha-576225-m04"  ...
	I0308 03:16:52.991867  933211 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 03:16:52.991905  933211 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:16:52.992138  933211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 03:16:52.992167  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:16:52.995394  933211 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:52.995838  933211 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:12:48 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:16:52.995863  933211 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:16:52.996009  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:16:52.996205  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:16:52.996387  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:16:52.996511  933211 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:16:53.081977  933211 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 03:16:53.136996  933211 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 03:16:53.192531  933211 main.go:141] libmachine: Stopping "ha-576225-m04"...
	I0308 03:16:53.192564  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:53.194347  933211 main.go:141] libmachine: (ha-576225-m04) Calling .Stop
	I0308 03:16:53.198129  933211 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 0/120
	I0308 03:16:54.200422  933211 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 1/120
	I0308 03:16:55.202261  933211 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:16:55.203653  933211 main.go:141] libmachine: Machine "ha-576225-m04" was stopped.
	I0308 03:16:55.203672  933211 stop.go:75] duration metric: took 2.211807383s to stop
	I0308 03:16:55.203715  933211 stop.go:39] StopHost: ha-576225-m03
	I0308 03:16:55.204168  933211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:16:55.204219  933211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:16:55.221288  933211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45555
	I0308 03:16:55.221725  933211 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:16:55.222222  933211 main.go:141] libmachine: Using API Version  1
	I0308 03:16:55.222244  933211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:16:55.222584  933211 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:16:55.224515  933211 out.go:177] * Stopping node "ha-576225-m03"  ...
	I0308 03:16:55.225837  933211 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 03:16:55.225861  933211 main.go:141] libmachine: (ha-576225-m03) Calling .DriverName
	I0308 03:16:55.226154  933211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 03:16:55.226184  933211 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHHostname
	I0308 03:16:55.229054  933211 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:55.229519  933211 main.go:141] libmachine: (ha-576225-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:8f:ef", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:11:23 +0000 UTC Type:0 Mac:52:54:00:e1:8f:ef Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-576225-m03 Clientid:01:52:54:00:e1:8f:ef}
	I0308 03:16:55.229553  933211 main.go:141] libmachine: (ha-576225-m03) DBG | domain ha-576225-m03 has defined IP address 192.168.39.17 and MAC address 52:54:00:e1:8f:ef in network mk-ha-576225
	I0308 03:16:55.229704  933211 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHPort
	I0308 03:16:55.229898  933211 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHKeyPath
	I0308 03:16:55.230063  933211 main.go:141] libmachine: (ha-576225-m03) Calling .GetSSHUsername
	I0308 03:16:55.230229  933211 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m03/id_rsa Username:docker}
	I0308 03:16:55.315122  933211 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 03:16:55.370332  933211 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 03:16:55.425761  933211 main.go:141] libmachine: Stopping "ha-576225-m03"...
	I0308 03:16:55.425804  933211 main.go:141] libmachine: (ha-576225-m03) Calling .GetState
	I0308 03:16:55.427461  933211 main.go:141] libmachine: (ha-576225-m03) Calling .Stop
	I0308 03:16:55.431269  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 0/120
	I0308 03:16:56.432852  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 1/120
	I0308 03:16:57.434371  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 2/120
	I0308 03:16:58.436074  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 3/120
	I0308 03:16:59.437374  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 4/120
	I0308 03:17:00.439577  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 5/120
	I0308 03:17:01.441395  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 6/120
	I0308 03:17:02.442840  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 7/120
	I0308 03:17:03.444628  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 8/120
	I0308 03:17:04.445892  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 9/120
	I0308 03:17:05.447336  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 10/120
	I0308 03:17:06.448735  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 11/120
	I0308 03:17:07.450188  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 12/120
	I0308 03:17:08.452232  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 13/120
	I0308 03:17:09.453564  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 14/120
	I0308 03:17:10.455516  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 15/120
	I0308 03:17:11.457017  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 16/120
	I0308 03:17:12.458403  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 17/120
	I0308 03:17:13.460051  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 18/120
	I0308 03:17:14.461705  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 19/120
	I0308 03:17:15.463208  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 20/120
	I0308 03:17:16.464719  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 21/120
	I0308 03:17:17.466255  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 22/120
	I0308 03:17:18.467742  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 23/120
	I0308 03:17:19.469331  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 24/120
	I0308 03:17:20.471166  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 25/120
	I0308 03:17:21.472680  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 26/120
	I0308 03:17:22.474121  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 27/120
	I0308 03:17:23.475485  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 28/120
	I0308 03:17:24.476888  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 29/120
	I0308 03:17:25.478541  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 30/120
	I0308 03:17:26.479881  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 31/120
	I0308 03:17:27.481906  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 32/120
	I0308 03:17:28.483249  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 33/120
	I0308 03:17:29.484556  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 34/120
	I0308 03:17:30.486283  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 35/120
	I0308 03:17:31.487807  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 36/120
	I0308 03:17:32.489180  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 37/120
	I0308 03:17:33.490501  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 38/120
	I0308 03:17:34.492001  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 39/120
	I0308 03:17:35.493802  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 40/120
	I0308 03:17:36.495277  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 41/120
	I0308 03:17:37.496513  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 42/120
	I0308 03:17:38.497787  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 43/120
	I0308 03:17:39.499268  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 44/120
	I0308 03:17:40.501098  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 45/120
	I0308 03:17:41.502684  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 46/120
	I0308 03:17:42.504132  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 47/120
	I0308 03:17:43.505449  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 48/120
	I0308 03:17:44.506713  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 49/120
	I0308 03:17:45.508575  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 50/120
	I0308 03:17:46.510814  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 51/120
	I0308 03:17:47.512167  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 52/120
	I0308 03:17:48.513582  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 53/120
	I0308 03:17:49.515724  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 54/120
	I0308 03:17:50.517352  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 55/120
	I0308 03:17:51.518778  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 56/120
	I0308 03:17:52.520095  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 57/120
	I0308 03:17:53.521569  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 58/120
	I0308 03:17:54.523944  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 59/120
	I0308 03:17:55.526022  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 60/120
	I0308 03:17:56.527531  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 61/120
	I0308 03:17:57.528790  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 62/120
	I0308 03:17:58.530165  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 63/120
	I0308 03:17:59.531502  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 64/120
	I0308 03:18:00.533644  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 65/120
	I0308 03:18:01.535759  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 66/120
	I0308 03:18:02.537814  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 67/120
	I0308 03:18:03.539370  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 68/120
	I0308 03:18:04.540866  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 69/120
	I0308 03:18:05.542728  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 70/120
	I0308 03:18:06.544646  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 71/120
	I0308 03:18:07.545933  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 72/120
	I0308 03:18:08.547738  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 73/120
	I0308 03:18:09.549220  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 74/120
	I0308 03:18:10.551254  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 75/120
	I0308 03:18:11.552720  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 76/120
	I0308 03:18:12.554371  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 77/120
	I0308 03:18:13.555993  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 78/120
	I0308 03:18:14.557587  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 79/120
	I0308 03:18:15.559457  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 80/120
	I0308 03:18:16.560747  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 81/120
	I0308 03:18:17.562094  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 82/120
	I0308 03:18:18.563615  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 83/120
	I0308 03:18:19.565071  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 84/120
	I0308 03:18:20.566540  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 85/120
	I0308 03:18:21.567979  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 86/120
	I0308 03:18:22.569399  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 87/120
	I0308 03:18:23.571853  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 88/120
	I0308 03:18:24.573132  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 89/120
	I0308 03:18:25.574901  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 90/120
	I0308 03:18:26.576470  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 91/120
	I0308 03:18:27.578057  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 92/120
	I0308 03:18:28.579581  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 93/120
	I0308 03:18:29.581693  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 94/120
	I0308 03:18:30.583867  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 95/120
	I0308 03:18:31.585466  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 96/120
	I0308 03:18:32.586887  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 97/120
	I0308 03:18:33.588453  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 98/120
	I0308 03:18:34.589961  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 99/120
	I0308 03:18:35.591832  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 100/120
	I0308 03:18:36.593241  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 101/120
	I0308 03:18:37.594835  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 102/120
	I0308 03:18:38.596297  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 103/120
	I0308 03:18:39.597856  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 104/120
	I0308 03:18:40.599728  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 105/120
	I0308 03:18:41.601266  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 106/120
	I0308 03:18:42.602923  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 107/120
	I0308 03:18:43.604227  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 108/120
	I0308 03:18:44.605468  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 109/120
	I0308 03:18:45.607376  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 110/120
	I0308 03:18:46.608616  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 111/120
	I0308 03:18:47.610091  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 112/120
	I0308 03:18:48.611475  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 113/120
	I0308 03:18:49.612913  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 114/120
	I0308 03:18:50.614394  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 115/120
	I0308 03:18:51.615756  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 116/120
	I0308 03:18:52.616920  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 117/120
	I0308 03:18:53.618462  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 118/120
	I0308 03:18:54.620130  933211 main.go:141] libmachine: (ha-576225-m03) Waiting for machine to stop 119/120
	I0308 03:18:55.621264  933211 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 03:18:55.621376  933211 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0308 03:18:55.623229  933211 out.go:177] 
	W0308 03:18:55.624644  933211 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0308 03:18:55.624670  933211 out.go:239] * 
	* 
	W0308 03:18:55.630904  933211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 03:18:55.632233  933211 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-576225 -v=7 --alsologtostderr" : exit status 82
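(Editor's note: the exit status 82 above comes from the GUEST_STOP_TIMEOUT path shown in the stderr block: the kvm2 driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") and gives up while the guest still reports "Running". The sketch below is an illustrative reconstruction of that polling pattern only, not minikube's actual implementation; the machineState type, waitForStop function, and getState probe are hypothetical stand-ins.)

	package main

	import (
		"fmt"
		"time"
	)

	type machineState string

	const (
		stateRunning machineState = "Running"
		stateStopped machineState = "Stopped"
	)

	// waitForStop polls the VM state once per interval, up to `attempts` times,
	// and returns an error if the machine still is not stopped afterwards.
	// getState is a hypothetical probe; a real driver would query libvirt.
	func waitForStop(getState func() machineState, attempts int, interval time.Duration) error {
		for i := 1; i <= attempts; i++ {
			if getState() == stateStopped {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return fmt.Errorf("unable to stop vm, current state %q", getState())
	}

	func main() {
		// Simulate a guest that ignores the shutdown request, mirroring the
		// GUEST_STOP_TIMEOUT above (interval shortened here so the demo finishes quickly).
		neverStops := func() machineState { return stateRunning }
		if err := waitForStop(neverStops, 120, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}

With a 1-second interval and 120 attempts this corresponds to the roughly two-minute window visible in the timestamps above (03:16:55 to 03:18:55) before the test falls back to restarting the cluster.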
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-576225 --wait=true -v=7 --alsologtostderr
E0308 03:22:52.009553  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-576225 --wait=true -v=7 --alsologtostderr: (4m9.967640311s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-576225
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-576225 -n ha-576225
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-576225 logs -n 25: (2.01687492s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m04 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp testdata/cp-test.txt                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m04_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03:/home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m03 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-576225 node stop m02 -v=7                                                     | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-576225 node start m02 -v=7                                                    | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-576225 -v=7                                                           | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-576225 -v=7                                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-576225 --wait=true -v=7                                                    | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:18 UTC | 08 Mar 24 03:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-576225                                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:23 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:18:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:18:55.693590  934050 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:18:55.694085  934050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:18:55.694105  934050 out.go:304] Setting ErrFile to fd 2...
	I0308 03:18:55.694112  934050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:18:55.694605  934050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:18:55.695841  934050 out.go:298] Setting JSON to false
	I0308 03:18:55.696834  934050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25262,"bootTime":1709842674,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:18:55.696916  934050 start.go:139] virtualization: kvm guest
	I0308 03:18:55.698848  934050 out.go:177] * [ha-576225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:18:55.700650  934050 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:18:55.702081  934050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:18:55.700714  934050 notify.go:220] Checking for updates...
	I0308 03:18:55.704768  934050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:18:55.706228  934050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:18:55.707640  934050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:18:55.708975  934050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:18:55.710669  934050 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:18:55.710765  934050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:18:55.711179  934050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:18:55.711224  934050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:18:55.727843  934050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0308 03:18:55.728263  934050 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:18:55.728810  934050 main.go:141] libmachine: Using API Version  1
	I0308 03:18:55.728834  934050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:18:55.729228  934050 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:18:55.729449  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.766456  934050 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:18:55.767796  934050 start.go:297] selected driver: kvm2
	I0308 03:18:55.767809  934050 start.go:901] validating driver "kvm2" against &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:18:55.767962  934050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:18:55.768320  934050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:18:55.768413  934050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:18:55.783843  934050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:18:55.784480  934050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:18:55.784553  934050 cni.go:84] Creating CNI manager for ""
	I0308 03:18:55.784564  934050 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0308 03:18:55.784632  934050 start.go:340] cluster config:
	{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:18:55.784781  934050 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:18:55.786541  934050 out.go:177] * Starting "ha-576225" primary control-plane node in "ha-576225" cluster
	I0308 03:18:55.787925  934050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:18:55.787958  934050 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:18:55.787969  934050 cache.go:56] Caching tarball of preloaded images
	I0308 03:18:55.788045  934050 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:18:55.788057  934050 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:18:55.788172  934050 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:18:55.788351  934050 start.go:360] acquireMachinesLock for ha-576225: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:18:55.788395  934050 start.go:364] duration metric: took 26.174µs to acquireMachinesLock for "ha-576225"
	I0308 03:18:55.788410  934050 start.go:96] Skipping create...Using existing machine configuration
	I0308 03:18:55.788418  934050 fix.go:54] fixHost starting: 
	I0308 03:18:55.788665  934050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:18:55.788695  934050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:18:55.803299  934050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0308 03:18:55.803741  934050 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:18:55.804198  934050 main.go:141] libmachine: Using API Version  1
	I0308 03:18:55.804220  934050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:18:55.804535  934050 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:18:55.804749  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.804881  934050 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:18:55.806557  934050 fix.go:112] recreateIfNeeded on ha-576225: state=Running err=<nil>
	W0308 03:18:55.806579  934050 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 03:18:55.808259  934050 out.go:177] * Updating the running kvm2 "ha-576225" VM ...
	I0308 03:18:55.809487  934050 machine.go:94] provisionDockerMachine start ...
	I0308 03:18:55.809508  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.809729  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:55.812039  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.812501  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:55.812527  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.812668  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:55.812832  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.812975  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.813124  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:55.813315  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:55.813500  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:55.813512  934050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 03:18:55.936168  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:18:55.936204  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:55.936458  934050 buildroot.go:166] provisioning hostname "ha-576225"
	I0308 03:18:55.936487  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:55.936709  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:55.939467  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.939922  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:55.939953  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.940054  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:55.940236  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.940387  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.940547  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:55.940794  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:55.940984  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:55.940996  934050 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225 && echo "ha-576225" | sudo tee /etc/hostname
	I0308 03:18:56.076036  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:18:56.076077  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.078815  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.079249  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.079273  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.079455  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.079669  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.079824  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.079961  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.080106  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:56.080285  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:56.080317  934050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:18:56.198665  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:18:56.198719  934050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:18:56.198736  934050 buildroot.go:174] setting up certificates
	I0308 03:18:56.198746  934050 provision.go:84] configureAuth start
	I0308 03:18:56.198754  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:56.199059  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:18:56.201938  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.202357  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.202383  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.202555  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.205072  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.205412  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.205446  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.205564  934050 provision.go:143] copyHostCerts
	I0308 03:18:56.205616  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:18:56.205662  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:18:56.205675  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:18:56.205768  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:18:56.205883  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:18:56.205910  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:18:56.205917  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:18:56.205957  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:18:56.206034  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:18:56.206059  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:18:56.206068  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:18:56.206099  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:18:56.206187  934050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225 san=[127.0.0.1 192.168.39.251 ha-576225 localhost minikube]
	I0308 03:18:56.295338  934050 provision.go:177] copyRemoteCerts
	I0308 03:18:56.295399  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:18:56.295429  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.297940  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.298258  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.298290  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.298420  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.298612  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.298793  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.298926  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:18:56.389721  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:18:56.389790  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:18:56.419979  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:18:56.420044  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0308 03:18:56.447385  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:18:56.447438  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:18:56.474531  934050 provision.go:87] duration metric: took 275.770203ms to configureAuth
	I0308 03:18:56.474558  934050 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:18:56.474768  934050 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:18:56.474845  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.477520  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.477839  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.477863  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.478024  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.478218  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.478362  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.478483  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.478645  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:56.478860  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:56.478887  934050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:20:27.318236  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:20:27.318272  934050 machine.go:97] duration metric: took 1m31.5087671s to provisionDockerMachine
	I0308 03:20:27.318288  934050 start.go:293] postStartSetup for "ha-576225" (driver="kvm2")
	I0308 03:20:27.318300  934050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:20:27.318336  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.318757  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:20:27.318789  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.321952  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.322409  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.322439  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.322609  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.322809  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.322966  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.323108  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.413061  934050 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:20:27.417871  934050 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:20:27.417893  934050 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:20:27.417949  934050 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:20:27.418024  934050 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:20:27.418035  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:20:27.418127  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:20:27.428103  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:20:27.455886  934050 start.go:296] duration metric: took 137.572557ms for postStartSetup
	I0308 03:20:27.455970  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.456239  934050 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0308 03:20:27.456264  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.459057  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.459499  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.459540  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.459707  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.459894  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.460042  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.460158  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	W0308 03:20:27.547638  934050 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0308 03:20:27.547682  934050 fix.go:56] duration metric: took 1m31.759264312s for fixHost
	I0308 03:20:27.547703  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.550352  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.550742  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.550770  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.550963  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.551153  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.551375  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.551537  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.551710  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:20:27.551887  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:20:27.551898  934050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 03:20:27.666992  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709868027.624546486
	
	I0308 03:20:27.667019  934050 fix.go:216] guest clock: 1709868027.624546486
	I0308 03:20:27.667026  934050 fix.go:229] Guest: 2024-03-08 03:20:27.624546486 +0000 UTC Remote: 2024-03-08 03:20:27.547690075 +0000 UTC m=+91.903693214 (delta=76.856411ms)
	I0308 03:20:27.667050  934050 fix.go:200] guest clock delta is within tolerance: 76.856411ms
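The step above compares the guest VM's clock against the host and only resets it when the drift is too large. A minimal Go sketch of that kind of tolerance check (the 2s tolerance and the 76ms delta are illustrative values taken from the log, not necessarily minikube's exact constants):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute host/guest clock delta and whether it
// is small enough to leave the guest clock alone.
func withinTolerance(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(76 * time.Millisecond) // drift comparable to the log above
	if d, ok := withinTolerance(host, guest, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	} else {
		fmt.Printf("guest clock drift too large (%v), would reset the guest clock\n", d)
	}
}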
	I0308 03:20:27.667057  934050 start.go:83] releasing machines lock for "ha-576225", held for 1m31.878652614s
	I0308 03:20:27.667082  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.667360  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:20:27.670055  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.670458  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.670479  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.670698  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671237  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671405  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671511  934050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:20:27.671553  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.671619  934050 ssh_runner.go:195] Run: cat /version.json
	I0308 03:20:27.671643  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.673960  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674249  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674318  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.674343  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674480  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.674668  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.674828  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.674847  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674859  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.674970  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.675039  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.675121  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.675259  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.675422  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.782462  934050 ssh_runner.go:195] Run: systemctl --version
	I0308 03:20:27.789172  934050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:20:27.960974  934050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:20:27.969958  934050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:20:27.970019  934050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:20:27.980036  934050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 03:20:27.980056  934050 start.go:494] detecting cgroup driver to use...
	I0308 03:20:27.980107  934050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:20:27.997625  934050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:20:28.012549  934050 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:20:28.012644  934050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:20:28.026785  934050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:20:28.040643  934050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:20:28.191522  934050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:20:28.342429  934050 docker.go:233] disabling docker service ...
	I0308 03:20:28.342495  934050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:20:28.360775  934050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:20:28.375036  934050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:20:28.526994  934050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:20:28.681535  934050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:20:28.697207  934050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:20:28.719206  934050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:20:28.719286  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.730970  934050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:20:28.731028  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.742085  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.753251  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
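The four sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, drop the old conmon_cgroup line and re-add it as "pod". An illustrative Go sketch of the same edits applied to an in-memory config string (not minikube's code; the starting values are made up):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = strings.Replace(conf, `cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	fmt.Print(conf)
}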
	I0308 03:20:28.764417  934050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:20:28.775957  934050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:20:28.785920  934050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:20:28.796000  934050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:20:28.945853  934050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:20:29.244069  934050 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:20:29.244175  934050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:20:29.250363  934050 start.go:562] Will wait 60s for crictl version
	I0308 03:20:29.250426  934050 ssh_runner.go:195] Run: which crictl
	I0308 03:20:29.254967  934050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:20:29.299767  934050 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:20:29.299885  934050 ssh_runner.go:195] Run: crio --version
	I0308 03:20:29.334250  934050 ssh_runner.go:195] Run: crio --version
	I0308 03:20:29.367646  934050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:20:29.368890  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:20:29.371437  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:29.371793  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:29.371821  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:29.372008  934050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:20:29.377527  934050 kubeadm.go:877] updating cluster {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:20:29.377728  934050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:20:29.377812  934050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:20:29.424638  934050 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:20:29.424661  934050 crio.go:415] Images already preloaded, skipping extraction
	I0308 03:20:29.424722  934050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:20:29.463047  934050 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:20:29.463071  934050 cache_images.go:84] Images are preloaded, skipping loading
	I0308 03:20:29.463094  934050 kubeadm.go:928] updating node { 192.168.39.251 8443 v1.28.4 crio true true} ...
	I0308 03:20:29.463218  934050 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:20:29.463304  934050 ssh_runner.go:195] Run: crio config
	I0308 03:20:29.514810  934050 cni.go:84] Creating CNI manager for ""
	I0308 03:20:29.514839  934050 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0308 03:20:29.514853  934050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:20:29.514879  934050 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-576225 NodeName:ha-576225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:20:29.515083  934050 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-576225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
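The eviction thresholds in the kubelet section above are literal percent strings ("0%"). When text like that is pushed through Go's fmt family without doubling the %, the formatter treats the % as the start of a verb and degrades the output to %!…(MISSING) placeholders, which is the kind of mangling that shows up when commands and configs containing a bare % are relayed through fmt-style loggers. A minimal demonstration of the failure and the %% fix:

package main

import "fmt"

func main() {
	// Bug: the bare % is interpreted as a verb with no argument.
	fmt.Println(fmt.Sprintf(`nodefs.available: "0%"`)) // nodefs.available: "0%!"(MISSING)
	// Fix: escape the literal percent sign as %%.
	fmt.Println(fmt.Sprintf(`nodefs.available: "0%%"`)) // nodefs.available: "0%"
}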
	I0308 03:20:29.515117  934050 kube-vip.go:101] generating kube-vip config ...
	I0308 03:20:29.515228  934050 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0308 03:20:29.515286  934050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:20:29.526769  934050 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:20:29.526868  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0308 03:20:29.538517  934050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0308 03:20:29.557824  934050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:20:29.575758  934050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 03:20:29.594221  934050 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:20:29.611766  934050 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:20:29.616938  934050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:20:29.777183  934050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:20:29.862812  934050 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.251
	I0308 03:20:29.862838  934050 certs.go:194] generating shared ca certs ...
	I0308 03:20:29.862859  934050 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.863056  934050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:20:29.863117  934050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:20:29.863132  934050 certs.go:256] generating profile certs ...
	I0308 03:20:29.863236  934050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:20:29.863281  934050 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7
	I0308 03:20:29.863304  934050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.17 192.168.39.254]
	I0308 03:20:29.918862  934050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 ...
	I0308 03:20:29.918895  934050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7: {Name:mk09cb6a2e10d207415096ad10e4b87e7bf27b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.919086  934050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7 ...
	I0308 03:20:29.919103  934050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7: {Name:mkf66996a85416a2e12670d15a6b3c96e7ca62a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.919207  934050 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:20:29.919405  934050 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
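The apiserver profile cert generated above carries IP SANs for the service IP, localhost, every control-plane node and the kube-vip VIP, so the apiserver presents a valid certificate no matter which of those addresses a client dials. A hedged Go sketch of issuing such a certificate (self-signed here for brevity; the real cert is signed by the cluster CA, and this is not minikube's implementation):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.251"), net.ParseIP("192.168.39.128"),
			net.ParseIP("192.168.39.17"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}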
	I0308 03:20:29.919584  934050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:20:29.919603  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:20:29.919621  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:20:29.919637  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:20:29.919661  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:20:29.919688  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:20:29.919707  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:20:29.919725  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:20:29.919746  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:20:29.919826  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:20:29.919871  934050 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:20:29.919887  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:20:29.919920  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:20:29.919946  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:20:29.919969  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:20:29.920005  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:20:29.920039  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:20:29.920062  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:20:29.920080  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:29.920682  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:20:29.947672  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:20:29.979696  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:20:30.005180  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:20:30.039857  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 03:20:30.064976  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:20:30.092076  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:20:30.118146  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:20:30.145971  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:20:30.171307  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:20:30.196847  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:20:30.223296  934050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:20:30.243469  934050 ssh_runner.go:195] Run: openssl version
	I0308 03:20:30.250304  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:20:30.267867  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.273161  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.273234  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.279802  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:20:30.290653  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:20:30.303890  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.311738  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.311786  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.343013  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:20:30.353697  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:20:30.365313  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.370314  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.370358  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.376716  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
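The three `test -L … || ln -fs …` steps above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs, creating the symlink only when it is missing. A Go equivalent of that idempotent link step (paths are the ones from the log, shown for illustration only):

package main

import (
	"fmt"
	"os"
)

// ensureSymlink leaves an existing symlink alone and otherwise (re)creates it,
// mirroring `test -L link || ln -fs target link`.
func ensureSymlink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, nothing to do
	}
	_ = os.Remove(link) // mirror ln -f: replace whatever is there
	return os.Symlink(target, link)
}

func main() {
	if err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}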
	I0308 03:20:30.386548  934050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:20:30.391426  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 03:20:30.397438  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 03:20:30.403395  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 03:20:30.409658  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 03:20:30.415714  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 03:20:30.421675  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
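Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds). The same check sketched in Go, loading a PEM certificate and comparing its NotAfter against now plus the window (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at pemPath expires inside the
// given window, equivalent to openssl's -checkend.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}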
	I0308 03:20:30.428087  934050 kubeadm.go:391] StartCluster: {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:20:30.428195  934050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:20:30.428229  934050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:20:30.467322  934050 cri.go:89] found id: "2ddece87299a4ab5401ca03a7ee45a1fa30f45a0c84e2be85c25c65370263695"
	I0308 03:20:30.467341  934050 cri.go:89] found id: "087b18b1034c8ec0a5ae325ddf86eab41c98a172d5559d565e6a42cce60940a7"
	I0308 03:20:30.467344  934050 cri.go:89] found id: "d58e904f7b410b152ab1b98f2b1abc397aaad1e24fa604547ed0fce883eb6d49"
	I0308 03:20:30.467347  934050 cri.go:89] found id: "4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3"
	I0308 03:20:30.467350  934050 cri.go:89] found id: "6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f"
	I0308 03:20:30.467355  934050 cri.go:89] found id: "c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba"
	I0308 03:20:30.467358  934050 cri.go:89] found id: "c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788"
	I0308 03:20:30.467361  934050 cri.go:89] found id: "e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df"
	I0308 03:20:30.467365  934050 cri.go:89] found id: "da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176"
	I0308 03:20:30.467372  934050 cri.go:89] found id: "79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025"
	I0308 03:20:30.467376  934050 cri.go:89] found id: "556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2"
	I0308 03:20:30.467381  934050 cri.go:89] found id: "fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c"
	I0308 03:20:30.467387  934050 cri.go:89] found id: "77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446"
	I0308 03:20:30.467392  934050 cri.go:89] found id: ""
	I0308 03:20:30.467458  934050 ssh_runner.go:195] Run: sudo runc list -f json
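The container discovery step above runs crictl with a namespace label filter and collects the returned container IDs one per line. An illustrative Go sketch of the same query (it shells out to crictl, which must be on PATH; this is not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}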
	
	
	==> CRI-O <==
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.394939954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868186394897308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df336845-62b4-4ab3-a507-aaf57b0e7184 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.395662470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc6939e8-4e9c-454f-a37f-072f9e79bc08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.395792948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc6939e8-4e9c-454f-a37f-072f9e79bc08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.396213630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc6939e8-4e9c-454f-a37f-072f9e79bc08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.448427974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=440b64bc-0c9d-47c9-91d6-c6963baca1c9 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.448530452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=440b64bc-0c9d-47c9-91d6-c6963baca1c9 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.449695165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fefb1ee-6048-4940-a851-095c89bf1296 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.450200785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868186450176062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fefb1ee-6048-4940-a851-095c89bf1296 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.450921668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1f81fb4-3e9e-4496-8ec2-6510a035b7a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.451008125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1f81fb4-3e9e-4496-8ec2-6510a035b7a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.451573035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1f81fb4-3e9e-4496-8ec2-6510a035b7a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.504626603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee41ba0e-76ae-4c97-aa5b-b7758863b0ce name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.504706372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee41ba0e-76ae-4c97-aa5b-b7758863b0ce name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.505624602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1333002-d92e-4b75-9e8e-7f81b4ae042b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.506060617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868186506037688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1333002-d92e-4b75-9e8e-7f81b4ae042b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.506627251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05391bb3-7ec4-44a4-b2bd-0ebde6eeed3f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.506714851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05391bb3-7ec4-44a4-b2bd-0ebde6eeed3f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.507100504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05391bb3-7ec4-44a4-b2bd-0ebde6eeed3f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.557556698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ae09cd5-b6b4-4f38-9593-f2b4073a3563 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.558395651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ae09cd5-b6b4-4f38-9593-f2b4073a3563 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.560416176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=175ace17-890a-4bea-a4a3-de7f70d470e6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.560923076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868186560899955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=175ace17-890a-4bea-a4a3-de7f70d470e6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.561774191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7186c4bb-71a7-4a9a-8ccb-83ee87d2feaf name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.561857233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7186c4bb-71a7-4a9a-8ccb-83ee87d2feaf name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:23:06 ha-576225 crio[3866]: time="2024-03-08 03:23:06.562274067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7186c4bb-71a7-4a9a-8ccb-83ee87d2feaf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	08c05f03945c6       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   5e3b38f17f036       kindnet-dxqvf
	7961f33abef9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   1b2964d418016       storage-provisioner
	e98027e15146a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   299d8fccfabec       kube-controller-manager-ha-576225
	ba2990314b56f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a28ed1ee400c9       busybox-5b5d89c9d6-9594n
	690c7f04f7df3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   bad7a444aad7c       kube-apiserver-ha-576225
	247265ee3f9ea       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   18a4467d6c1a6       kube-vip-ha-576225
	330abab8c9d77       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   8f4d0b4c36be7       kube-proxy-pcmj2
	f39e571f16421       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   1b2964d418016       storage-provisioner
	a2f20b74182ef       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   fa3754a5a1980       coredns-5dd5756b68-pqz96
	fb96559bcdaca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   7b5a7e1bf92b7       coredns-5dd5756b68-8qvhp
	32c08296db363       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   ca908871c8b99       etcd-ha-576225
	cf5e9db04d632       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   299d8fccfabec       kube-controller-manager-ha-576225
	41152db457cd3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   d5cc3aab4490e       kube-scheduler-ha-576225
	9417e2d81aaec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   bad7a444aad7c       kube-apiserver-ha-576225
	8c8be87a59f4f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   5e3b38f17f036       kindnet-dxqvf
	4b7d5042ade29       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   a6b1803470779       kube-vip-ha-576225
	c5282718f03eb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   0524f01439e2f       busybox-5b5d89c9d6-9594n
	c29d3c09ae3c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   632fde5a7793c       coredns-5dd5756b68-8qvhp
	e6551e5e70b01       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   5d9f21a723332       coredns-5dd5756b68-pqz96
	da2c9bb706201       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   9f60642cbf5af       kube-proxy-pcmj2
	79db3710d20d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago       Exited              etcd                      0                   5b9d25fbfde63       etcd-ha-576225
	77dc7f2494354       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago       Exited              kube-scheduler            0                   7a8444878ab4c       kube-scheduler-ha-576225
	
	
	==> coredns [a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38636 - 32327 "HINFO IN 498640267154758940.948528575063073994. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011032601s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788] <==
	[INFO] 10.244.0.4:54781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156597s
	[INFO] 10.244.2.2:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156855s
	[INFO] 10.244.2.2:51544 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122332s
	[INFO] 10.244.2.2:36974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001216836s
	[INFO] 10.244.2.2:46648 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079695s
	[INFO] 10.244.2.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116087s
	[INFO] 10.244.1.2:55081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181347s
	[INFO] 10.244.1.2:33288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001414035s
	[INFO] 10.244.1.2:34740 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200343s
	[INFO] 10.244.1.2:34593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089308s
	[INFO] 10.244.0.4:57556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168693s
	[INFO] 10.244.0.4:55624 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070785s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203686s
	[INFO] 10.244.2.2:38702 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143629s
	[INFO] 10.244.2.2:39439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:41980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276421s
	[INFO] 10.244.0.4:55612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118127s
	[INFO] 10.244.0.4:54270 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081257s
	[INFO] 10.244.2.2:49847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192089s
	[INFO] 10.244.2.2:45358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198525s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df] <==
	[INFO] 10.244.2.2:44074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245211s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020143s
	[INFO] 10.244.1.2:36967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124177s
	[INFO] 10.244.1.2:49099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135326s
	[INFO] 10.244.1.2:38253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253563s
	[INFO] 10.244.1.2:39140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097524s
	[INFO] 10.244.0.4:50886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066375s
	[INFO] 10.244.0.4:36001 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044745s
	[INFO] 10.244.2.2:52701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189269s
	[INFO] 10.244.1.2:56384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178001s
	[INFO] 10.244.1.2:57745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181456s
	[INFO] 10.244.1.2:36336 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125903s
	[INFO] 10.244.0.4:51847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152568s
	[INFO] 10.244.0.4:40398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222601s
	[INFO] 10.244.2.2:39215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179733s
	[INFO] 10.244.2.2:44810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018976s
	[INFO] 10.244.1.2:53930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169054s
	[INFO] 10.244.1.2:39490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132254s
	[INFO] 10.244.1.2:45653 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129104s
	[INFO] 10.244.1.2:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154053s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fb96559bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:41807 - 37908 "HINFO IN 8968042253440839441.6866134497195940646. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.10461782s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49664->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-576225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-576225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1732a5e385cf44ce86b216e3f63b18e9
	  System UUID:                1732a5e3-85cf-44ce-86b2-16e3f63b18e9
	  Boot ID:                    22459aef-7ea9-46db-b507-1fb97d6edacd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9594n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-8qvhp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-pqz96             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-576225                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dxqvf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-576225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-576225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pcmj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-576225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-576225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 105s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-576225 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Warning  ContainerGCFailed        2m38s (x2 over 3m38s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           92s                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	
	
	Name:               ha-576225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:10:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:23:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-576225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 852d29792aec4a87b8b6c74704738411
	  System UUID:                852d2979-2aec-4a87-b8b6-c74704738411
	  Boot ID:                    24134511-472b-4c29-ab6a-e21202d1931a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wlj7r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-576225-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-w8zww                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-576225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-576225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vjfqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-576225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-576225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 103s                   kube-proxy       
	  Normal  RegisteredNode           12m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  NodeNotReady             9m                     node-controller  Node ha-576225-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-576225-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-576225-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node ha-576225-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           91s                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           33s                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	
	
	Name:               ha-576225-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_12_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:12:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:22:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:22:28 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:22:28 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:22:28 +0000   Fri, 08 Mar 2024 03:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:22:28 +0000   Fri, 08 Mar 2024 03:12:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-576225-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e53bc87ed31a4387be9c7b928f4e70cd
	  System UUID:                e53bc87e-d31a-4387-be9c-7b928f4e70cd
	  Boot ID:                    94a90f9f-962d-4c0a-98ff-8fef38939a32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc27d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-576225-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-j425g                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-576225-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-576225-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gqc9f                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-576225-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-576225-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 48s   kube-proxy       
	  Normal   RegisteredNode           11m   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal   RegisteredNode           10m   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal   RegisteredNode           92s   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal   RegisteredNode           91s   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	  Normal   Starting                 70s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  69s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  69s   kubelet          Node ha-576225-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s   kubelet          Node ha-576225-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s   kubelet          Node ha-576225-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 69s   kubelet          Node ha-576225-m03 has been rebooted, boot id: 94a90f9f-962d-4c0a-98ff-8fef38939a32
	  Normal   RegisteredNode           33s   node-controller  Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller
	
	
	Name:               ha-576225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_13_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:22:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:22:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:22:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:22:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:22:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-576225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 524efacfa67040b0afe359afd19efdd6
	  System UUID:                524efacf-a670-40b0-afe3-59afd19efdd6
	  Boot ID:                    ef4a3e89-b497-42bb-928e-511b4417aeef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5qbg6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-mk2g8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   NodeReady                9m58s              kubelet          Node ha-576225-m04 status is now: NodeReady
	  Normal   RegisteredNode           92s                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   NodeNotReady             52s                node-controller  Node ha-576225-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           33s                node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-576225-m04 has been rebooted, boot id: ef4a3e89-b497-42bb-928e-511b4417aeef
	  Normal   NodeReady                9s                 kubelet          Node ha-576225-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 8 03:09] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.056257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063726] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.163955] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153131] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264990] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.215071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060445] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.086248] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.235554] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.086526] kauditd_printk_skb: 40 callbacks suppressed
	[  +2.541733] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[ +10.298670] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.185227] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 8 03:20] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.158565] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.179387] systemd-fstab-generator[3814]: Ignoring "noauto" option for root device
	[  +0.155760] systemd-fstab-generator[3826]: Ignoring "noauto" option for root device
	[  +0.261817] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[  +0.817334] systemd-fstab-generator[3957]: Ignoring "noauto" option for root device
	[  +6.712248] kauditd_printk_skb: 132 callbacks suppressed
	[ +13.785911] kauditd_printk_skb: 83 callbacks suppressed
	[Mar 8 03:21] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1] <==
	{"level":"warn","ts":"2024-03-08T03:22:00.484412Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:03.29344Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:03.294719Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:04.486848Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.17:2380/version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:04.486959Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:08.293569Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:08.295995Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:08.489486Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.17:2380/version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:08.489589Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:12.492288Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.17:2380/version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:12.492505Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"3687119b759a7dfe","error":"Get \"https://192.168.39.17:2380/version\": dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:13.294197Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-08T03:22:13.296563Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3687119b759a7dfe","rtt":"0s","error":"dial tcp 192.168.39.17:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-08T03:22:14.615201Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:14.615396Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:14.617261Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:14.627796Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9ebeb2ab026a2136","to":"3687119b759a7dfe","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-08T03:22:14.62788Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:14.631091Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9ebeb2ab026a2136","to":"3687119b759a7dfe","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-08T03:22:14.631117Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:17.548993Z","caller":"traceutil/trace.go:171","msg":"trace[560031978] transaction","detail":"{read_only:false; response_revision:2268; number_of_response:1; }","duration":"170.527078ms","start":"2024-03-08T03:22:17.378393Z","end":"2024-03-08T03:22:17.54892Z","steps":["trace[560031978] 'process raft request'  (duration: 170.394537ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:23:03.263082Z","caller":"traceutil/trace.go:171","msg":"trace[743005755] transaction","detail":"{read_only:false; response_revision:2442; number_of_response:1; }","duration":"145.771747ms","start":"2024-03-08T03:23:03.117273Z","end":"2024-03-08T03:23:03.263045Z","steps":["trace[743005755] 'process raft request'  (duration: 145.596177ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:23:03.264143Z","caller":"traceutil/trace.go:171","msg":"trace[423671852] linearizableReadLoop","detail":"{readStateIndex:2850; appliedIndex:2851; }","duration":"128.583101ms","start":"2024-03-08T03:23:03.135534Z","end":"2024-03-08T03:23:03.264118Z","steps":["trace[423671852] 'read index received'  (duration: 128.578922ms)","trace[423671852] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T03:23:03.264546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.934025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-08T03:23:03.265552Z","caller":"traceutil/trace.go:171","msg":"trace[501016056] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2442; }","duration":"130.02849ms","start":"2024-03-08T03:23:03.135507Z","end":"2024-03-08T03:23:03.265536Z","steps":["trace[501016056] 'agreement among raft nodes before linearized reading'  (duration: 128.705642ms)"],"step_count":1}
	
	
	==> etcd [79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025] <==
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-08T03:18:56.62851Z","caller":"traceutil/trace.go:171","msg":"trace[1597261702] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"7.818788021s","start":"2024-03-08T03:18:48.809719Z","end":"2024-03-08T03:18:56.628507Z","steps":["trace[1597261702] 'agreement among raft nodes before linearized reading'  (duration: 7.811415697s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T03:18:56.62853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T03:18:48.809716Z","time spent":"7.818801743s","remote":"127.0.0.1:33834","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-08T03:18:56.772422Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.251:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:18:56.772568Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.251:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T03:18:56.772648Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9ebeb2ab026a2136","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-08T03:18:56.772929Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773048Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.7731Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773228Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773393Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.77349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773522Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773546Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.773579Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.77365Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.773941Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774087Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774255Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774299Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.777752Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.251:2380"}
	{"level":"info","ts":"2024-03-08T03:18:56.778027Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.251:2380"}
	{"level":"info","ts":"2024-03-08T03:18:56.778067Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-576225","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.251:2380"],"advertise-client-urls":["https://192.168.39.251:2379"]}
	
	
	==> kernel <==
	 03:23:07 up 14 min,  0 users,  load average: 0.32, 0.41, 0.35
	Linux ha-576225 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f] <==
	I0308 03:22:33.047640       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:22:43.060631       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:22:43.060692       1 main.go:227] handling current node
	I0308 03:22:43.060708       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:22:43.060717       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:22:43.060847       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:22:43.060887       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:22:43.060980       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:22:43.061023       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:22:53.069124       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:22:53.069256       1 main.go:227] handling current node
	I0308 03:22:53.069309       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:22:53.069472       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:22:53.069658       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:22:53.069700       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:22:53.069796       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:22:53.069825       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:23:03.084422       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:23:03.084611       1 main.go:227] handling current node
	I0308 03:23:03.084668       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:23:03.084701       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:23:03.084854       1 main.go:223] Handling node with IPs: map[192.168.39.17:{}]
	I0308 03:23:03.084887       1 main.go:250] Node ha-576225-m03 has CIDR [10.244.2.0/24] 
	I0308 03:23:03.084961       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:23:03.084979       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409] <==
	I0308 03:20:30.680517       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0308 03:20:30.680669       1 main.go:107] hostIP = 192.168.39.251
	podIP = 192.168.39.251
	I0308 03:20:30.680872       1 main.go:116] setting mtu 1500 for CNI 
	I0308 03:20:30.680914       1 main.go:146] kindnetd IP family: "ipv4"
	I0308 03:20:30.680952       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0308 03:20:34.078716       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0308 03:20:34.079207       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:35.080112       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:37.083032       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:50.092736       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37] <==
	I0308 03:21:17.217752       1 establishing_controller.go:76] Starting EstablishingController
	I0308 03:21:17.217793       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0308 03:21:17.217824       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0308 03:21:17.217859       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 03:21:17.254673       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 03:21:17.254713       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 03:21:17.350519       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:21:17.357819       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 03:21:17.358040       1 aggregator.go:166] initial CRD sync complete...
	I0308 03:21:17.358089       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 03:21:17.358097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 03:21:17.358103       1 cache.go:39] Caches are synced for autoregister controller
	I0308 03:21:17.401287       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 03:21:17.407221       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 03:21:17.407371       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 03:21:17.407280       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 03:21:17.407529       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 03:21:17.409584       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:21:17.409917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0308 03:21:17.422927       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17]
	I0308 03:21:17.424240       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 03:21:17.432309       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0308 03:21:17.437088       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0308 03:21:18.218309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 03:21:18.761776       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.251]
	
	
	==> kube-apiserver [9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4] <==
	I0308 03:20:37.619582       1 options.go:220] external host was not specified, using 192.168.39.251
	I0308 03:20:37.625566       1 server.go:148] Version: v1.28.4
	I0308 03:20:37.625623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:20:38.412562       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0308 03:20:38.416641       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0308 03:20:38.416758       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0308 03:20:38.417012       1 instance.go:298] Using reconciler: lease
	W0308 03:20:58.407689       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0308 03:20:58.412420       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0308 03:20:58.418244       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785] <==
	I0308 03:20:38.552213       1 serving.go:348] Generated self-signed cert in-memory
	I0308 03:20:38.957256       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0308 03:20:38.957446       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:20:38.959685       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0308 03:20:38.959838       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:20:38.960066       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 03:20:38.960254       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0308 03:20:59.426178       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.251:8443/healthz\": dial tcp 192.168.39.251:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7] <==
	I0308 03:21:35.352187       1 taint_manager.go:210] "Sending events to api server"
	I0308 03:21:35.353205       1 event.go:307] "Event occurred" object="ha-576225" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225 event: Registered Node ha-576225 in Controller"
	I0308 03:21:35.355617       1 event.go:307] "Event occurred" object="ha-576225-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller"
	I0308 03:21:35.416951       1 shared_informer.go:318] Caches are synced for HPA
	I0308 03:21:35.433151       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 03:21:35.474216       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 03:21:35.600679       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225"
	I0308 03:21:35.600767       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225-m02"
	I0308 03:21:35.601124       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225-m03"
	I0308 03:21:35.601243       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-576225-m04"
	I0308 03:21:35.601290       1 event.go:307] "Event occurred" object="ha-576225-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225-m03 event: Registered Node ha-576225-m03 in Controller"
	I0308 03:21:35.601300       1 event.go:307] "Event occurred" object="ha-576225-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller"
	I0308 03:21:35.601738       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-wlj7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-wlj7r"
	I0308 03:21:35.606041       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0308 03:21:35.826019       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:21:35.836701       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 03:21:35.836765       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 03:21:59.080115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.806413ms"
	I0308 03:21:59.080252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.779µs"
	I0308 03:22:15.628455       1 event.go:307] "Event occurred" object="ha-576225-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-576225-m04 status is now: NodeNotReady"
	I0308 03:22:15.647834       1 event.go:307] "Event occurred" object="kube-system/kindnet-5qbg6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:22:15.675397       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-mk2g8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:22:19.271014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.187911ms"
	I0308 03:22:19.271114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.46µs"
	I0308 03:22:58.622953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-576225-m04"
	
	
	==> kube-proxy [330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2] <==
	I0308 03:20:39.116957       1 server_others.go:69] "Using iptables proxy"
	E0308 03:20:42.143719       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:45.216004       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:48.288071       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:54.431135       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:21:03.647732       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	I0308 03:21:21.058744       1 node.go:141] Successfully retrieved node IP: 192.168.39.251
	I0308 03:21:21.106299       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:21:21.106459       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:21:21.109094       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:21:21.109204       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:21:21.109554       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:21:21.109589       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:21:21.110881       1 config.go:188] "Starting service config controller"
	I0308 03:21:21.110948       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:21:21.110974       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:21:21.111005       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:21:21.111903       1 config.go:315] "Starting node config controller"
	I0308 03:21:21.113921       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:21:21.211596       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 03:21:21.211655       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:21:21.214180       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176] <==
	E0308 03:17:33.855048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:33.855116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:33.855156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.510787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.510974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.510787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.511041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.513478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.513543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:50.110881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:50.111079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:50.111193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:50.111246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:53.182851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:53.183740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:11.616928       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:11.617108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:11.617568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:11.617713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:14.687242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:14.687444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:42.335669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:42.335769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:51.552892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:51.553049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603] <==
	W0308 03:21:08.314525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.251:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.314930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.251:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.415719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.251:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.415832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.251:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.470166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.251:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.470261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.251:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.573987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.251:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.574157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.251:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.996437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.251:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.996517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.251:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:17.267821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:21:17.267942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:21:17.268045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:21:17.268081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:21:17.268197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 03:21:17.268262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 03:21:17.268396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:21:17.268444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:21:17.268512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 03:21:17.268543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 03:21:17.268620       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:21:17.268648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 03:21:17.268762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 03:21:17.268797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0308 03:21:18.929422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446] <==
	W0308 03:18:48.770619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:18:48.770711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:18:49.106141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:18:49.106274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 03:18:49.316718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:49.316778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:49.393518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:49.393676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:49.988889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:18:49.988982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:18:50.042581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:18:50.042654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:18:50.174771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 03:18:50.174824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 03:18:50.849107       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:18:50.849160       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:18:51.687067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:51.687190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:51.704684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:18:51.704806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:18:51.746918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 03:18:51.746991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0308 03:18:56.588530       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 03:18:56.588639       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0308 03:18:56.588822       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 08 03:21:15 ha-576225 kubelet[1359]: I0308 03:21:15.935456    1359 status_manager.go:853] "Failed to get status for pod" podUID="26cdb4c7afaf223219da4d02f01a1ea4" pod="kube-system/etcd-ha-576225" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-576225\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 08 03:21:15 ha-576225 kubelet[1359]: W0308 03:21:15.935526    1359 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1742": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 08 03:21:15 ha-576225 kubelet[1359]: E0308 03:21:15.936196    1359 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1742": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 08 03:21:15 ha-576225 kubelet[1359]: I0308 03:21:15.953694    1359 scope.go:117] "RemoveContainer" containerID="8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409"
	Mar 08 03:21:15 ha-576225 kubelet[1359]: E0308 03:21:15.954132    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-dxqvf_kube-system(68b9ef4f-0693-425c-b9e5-3232abe019b1)\"" pod="kube-system/kindnet-dxqvf" podUID="68b9ef4f-0693-425c-b9e5-3232abe019b1"
	Mar 08 03:21:16 ha-576225 kubelet[1359]: I0308 03:21:16.955046    1359 scope.go:117] "RemoveContainer" containerID="f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291"
	Mar 08 03:21:16 ha-576225 kubelet[1359]: E0308 03:21:16.955347    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(73ce39c2-3ef3-4c2a-996c-47a02fd12f4e)\"" pod="kube-system/storage-provisioner" podUID="73ce39c2-3ef3-4c2a-996c-47a02fd12f4e"
	Mar 08 03:21:22 ha-576225 kubelet[1359]: I0308 03:21:22.953174    1359 scope.go:117] "RemoveContainer" containerID="cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785"
	Mar 08 03:21:27 ha-576225 kubelet[1359]: I0308 03:21:27.953478    1359 scope.go:117] "RemoveContainer" containerID="8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409"
	Mar 08 03:21:27 ha-576225 kubelet[1359]: E0308 03:21:27.953960    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-dxqvf_kube-system(68b9ef4f-0693-425c-b9e5-3232abe019b1)\"" pod="kube-system/kindnet-dxqvf" podUID="68b9ef4f-0693-425c-b9e5-3232abe019b1"
	Mar 08 03:21:28 ha-576225 kubelet[1359]: I0308 03:21:28.970510    1359 scope.go:117] "RemoveContainer" containerID="f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291"
	Mar 08 03:21:28 ha-576225 kubelet[1359]: E0308 03:21:28.970859    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(73ce39c2-3ef3-4c2a-996c-47a02fd12f4e)\"" pod="kube-system/storage-provisioner" podUID="73ce39c2-3ef3-4c2a-996c-47a02fd12f4e"
	Mar 08 03:21:29 ha-576225 kubelet[1359]: E0308 03:21:29.001441    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:21:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:21:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:21:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:21:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:21:41 ha-576225 kubelet[1359]: I0308 03:21:41.953108    1359 scope.go:117] "RemoveContainer" containerID="8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409"
	Mar 08 03:21:41 ha-576225 kubelet[1359]: I0308 03:21:41.953722    1359 scope.go:117] "RemoveContainer" containerID="f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291"
	Mar 08 03:22:13 ha-576225 kubelet[1359]: I0308 03:22:13.860401    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-9594n" podStartSLOduration=587.93385144 podCreationTimestamp="2024-03-08 03:12:25 +0000 UTC" firstStartedPulling="2024-03-08 03:12:26.402260285 +0000 UTC m=+177.641149284" lastFinishedPulling="2024-03-08 03:12:27.328627098 +0000 UTC m=+178.567516114" observedRunningTime="2024-03-08 03:12:27.872774416 +0000 UTC m=+179.111663437" watchObservedRunningTime="2024-03-08 03:22:13.86021827 +0000 UTC m=+765.099107290"
	Mar 08 03:22:29 ha-576225 kubelet[1359]: E0308 03:22:29.003500    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:22:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:23:06.044943  935122 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18333-911675/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
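The "failed to read file .../lastStart.txt: bufio.Scanner: token too long" error in the stderr above is the standard failure mode of Go's bufio.Scanner when a single line in the scanned file exceeds the scanner's buffer (bufio.MaxScanTokenSize, 64 KiB by default). The following is only an illustrative, self-contained sketch of that behavior and of the usual workaround (Scanner.Buffer with a larger cap); it is not minikube's actual logs.go code.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // A single "line" longer than bufio.MaxScanTokenSize (64 KiB) makes Scan fail.
        long := strings.Repeat("x", 100*1024)

        s := bufio.NewScanner(strings.NewReader(long))
        for s.Scan() {
        }
        fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

        // Workaround: give the scanner a larger maximum token size.
        s = bufio.NewScanner(strings.NewReader(long))
        s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow tokens up to 1 MiB
        for s.Scan() {
        }
        fmt.Println("enlarged buffer:", s.Err()) // <nil>
    }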
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-576225 -n ha-576225
helpers_test.go:261: (dbg) Run:  kubectl --context ha-576225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (375.58s)
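The kubelet's recurring "Could not set up iptables canary" messages in the log above show it failing to create a throwaway chain (KUBE-KUBELET-CANARY) in the IPv6 nat table, which the guest kernel reports as unavailable ("Table does not exist (do you need to insmod?)"). The sketch below is a hedged illustration of that kind of create-then-delete probe using only standard iptables flags (-t, -N, -X); it is not the kubelet's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // canaryChainOK reports whether the given table is usable by trying to create
    // and then delete a throwaway chain, mirroring the "canary" idea in the kubelet log.
    func canaryChainOK(binary, table, chain string) error {
        // -t selects the table, -N creates a new chain.
        if out, err := exec.Command(binary, "-t", table, "-N", chain).CombinedOutput(); err != nil {
            return fmt.Errorf("creating chain %q in table %q: %v: %s", chain, table, err, out)
        }
        // -X deletes the (empty) chain again so the probe leaves no trace.
        if out, err := exec.Command(binary, "-t", table, "-X", chain).CombinedOutput(); err != nil {
            return fmt.Errorf("cleaning up chain %q: %v: %s", chain, err, out)
        }
        return nil
    }

    func main() {
        // On the node in this report the IPv6 nat table is unavailable, so this
        // probe would fail (ip6tables exits with status 3), as in the kubelet log.
        if err := canaryChainOK("ip6tables", "nat", "KUBE-KUBELET-CANARY"); err != nil {
            fmt.Println("ip6tables canary failed:", err)
        }
    }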

                                                
                                    
x
+
TestMutliControlPlane/serial/StopCluster (141.98s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 stop -v=7 --alsologtostderr
E0308 03:23:32.257114  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:24:55.304195  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 stop -v=7 --alsologtostderr: exit status 82 (2m0.495552386s)

                                                
                                                
-- stdout --
	* Stopping node "ha-576225-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:23:26.319106  935513 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:23:26.319423  935513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:23:26.319434  935513 out.go:304] Setting ErrFile to fd 2...
	I0308 03:23:26.319438  935513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:23:26.319611  935513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:23:26.319844  935513 out.go:298] Setting JSON to false
	I0308 03:23:26.319913  935513 mustload.go:65] Loading cluster: ha-576225
	I0308 03:23:26.320261  935513 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:23:26.320340  935513 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:23:26.320513  935513 mustload.go:65] Loading cluster: ha-576225
	I0308 03:23:26.320674  935513 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:23:26.320703  935513 stop.go:39] StopHost: ha-576225-m04
	I0308 03:23:26.321107  935513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:23:26.321144  935513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:23:26.337434  935513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0308 03:23:26.337909  935513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:23:26.338551  935513 main.go:141] libmachine: Using API Version  1
	I0308 03:23:26.338584  935513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:23:26.338946  935513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:23:26.341267  935513 out.go:177] * Stopping node "ha-576225-m04"  ...
	I0308 03:23:26.342342  935513 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 03:23:26.342388  935513 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:23:26.342639  935513 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 03:23:26.342676  935513 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:23:26.345553  935513 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:23:26.346017  935513 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:22:52 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:23:26.346062  935513 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:23:26.346174  935513 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:23:26.346390  935513 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:23:26.346555  935513 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:23:26.346725  935513 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	I0308 03:23:26.427914  935513 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 03:23:26.481338  935513 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 03:23:26.535271  935513 main.go:141] libmachine: Stopping "ha-576225-m04"...
	I0308 03:23:26.535317  935513 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:23:26.537139  935513 main.go:141] libmachine: (ha-576225-m04) Calling .Stop
	I0308 03:23:26.540667  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 0/120
	I0308 03:23:27.542284  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 1/120
	I0308 03:23:28.543916  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 2/120
	I0308 03:23:29.545313  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 3/120
	I0308 03:23:30.546892  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 4/120
	I0308 03:23:31.549004  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 5/120
	I0308 03:23:32.550258  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 6/120
	I0308 03:23:33.551798  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 7/120
	I0308 03:23:34.553802  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 8/120
	I0308 03:23:35.556198  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 9/120
	I0308 03:23:36.558409  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 10/120
	I0308 03:23:37.559930  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 11/120
	I0308 03:23:38.561288  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 12/120
	I0308 03:23:39.562527  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 13/120
	I0308 03:23:40.564098  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 14/120
	I0308 03:23:41.566244  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 15/120
	I0308 03:23:42.568533  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 16/120
	I0308 03:23:43.570208  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 17/120
	I0308 03:23:44.571627  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 18/120
	I0308 03:23:45.573095  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 19/120
	I0308 03:23:46.574715  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 20/120
	I0308 03:23:47.576043  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 21/120
	I0308 03:23:48.577591  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 22/120
	I0308 03:23:49.578891  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 23/120
	I0308 03:23:50.580193  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 24/120
	I0308 03:23:51.582235  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 25/120
	I0308 03:23:52.583542  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 26/120
	I0308 03:23:53.585811  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 27/120
	I0308 03:23:54.587745  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 28/120
	I0308 03:23:55.589244  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 29/120
	I0308 03:23:56.591565  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 30/120
	I0308 03:23:57.592996  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 31/120
	I0308 03:23:58.595212  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 32/120
	I0308 03:23:59.596521  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 33/120
	I0308 03:24:00.597903  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 34/120
	I0308 03:24:01.599694  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 35/120
	I0308 03:24:02.601104  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 36/120
	I0308 03:24:03.602400  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 37/120
	I0308 03:24:04.604301  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 38/120
	I0308 03:24:05.605547  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 39/120
	I0308 03:24:06.607115  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 40/120
	I0308 03:24:07.608588  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 41/120
	I0308 03:24:08.610241  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 42/120
	I0308 03:24:09.611388  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 43/120
	I0308 03:24:10.612607  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 44/120
	I0308 03:24:11.614516  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 45/120
	I0308 03:24:12.616198  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 46/120
	I0308 03:24:13.617383  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 47/120
	I0308 03:24:14.618544  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 48/120
	I0308 03:24:15.619929  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 49/120
	I0308 03:24:16.622124  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 50/120
	I0308 03:24:17.623845  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 51/120
	I0308 03:24:18.625074  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 52/120
	I0308 03:24:19.626416  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 53/120
	I0308 03:24:20.627661  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 54/120
	I0308 03:24:21.629605  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 55/120
	I0308 03:24:22.631912  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 56/120
	I0308 03:24:23.633241  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 57/120
	I0308 03:24:24.634614  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 58/120
	I0308 03:24:25.635965  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 59/120
	I0308 03:24:26.638070  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 60/120
	I0308 03:24:27.639752  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 61/120
	I0308 03:24:28.641032  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 62/120
	I0308 03:24:29.642867  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 63/120
	I0308 03:24:30.644378  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 64/120
	I0308 03:24:31.646513  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 65/120
	I0308 03:24:32.648278  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 66/120
	I0308 03:24:33.649745  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 67/120
	I0308 03:24:34.651764  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 68/120
	I0308 03:24:35.653138  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 69/120
	I0308 03:24:36.655273  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 70/120
	I0308 03:24:37.656594  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 71/120
	I0308 03:24:38.657983  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 72/120
	I0308 03:24:39.659774  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 73/120
	I0308 03:24:40.661106  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 74/120
	I0308 03:24:41.662968  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 75/120
	I0308 03:24:42.664298  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 76/120
	I0308 03:24:43.665669  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 77/120
	I0308 03:24:44.667635  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 78/120
	I0308 03:24:45.669336  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 79/120
	I0308 03:24:46.671978  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 80/120
	I0308 03:24:47.673236  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 81/120
	I0308 03:24:48.674657  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 82/120
	I0308 03:24:49.676657  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 83/120
	I0308 03:24:50.678183  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 84/120
	I0308 03:24:51.680163  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 85/120
	I0308 03:24:52.681470  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 86/120
	I0308 03:24:53.683753  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 87/120
	I0308 03:24:54.685157  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 88/120
	I0308 03:24:55.686846  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 89/120
	I0308 03:24:56.688732  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 90/120
	I0308 03:24:57.689944  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 91/120
	I0308 03:24:58.691340  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 92/120
	I0308 03:24:59.692584  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 93/120
	I0308 03:25:00.694672  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 94/120
	I0308 03:25:01.696802  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 95/120
	I0308 03:25:02.698252  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 96/120
	I0308 03:25:03.700649  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 97/120
	I0308 03:25:04.701967  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 98/120
	I0308 03:25:05.704134  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 99/120
	I0308 03:25:06.706010  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 100/120
	I0308 03:25:07.707266  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 101/120
	I0308 03:25:08.708750  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 102/120
	I0308 03:25:09.710305  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 103/120
	I0308 03:25:10.711812  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 104/120
	I0308 03:25:11.713759  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 105/120
	I0308 03:25:12.715054  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 106/120
	I0308 03:25:13.716472  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 107/120
	I0308 03:25:14.717875  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 108/120
	I0308 03:25:15.719629  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 109/120
	I0308 03:25:16.721858  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 110/120
	I0308 03:25:17.723881  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 111/120
	I0308 03:25:18.725233  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 112/120
	I0308 03:25:19.726688  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 113/120
	I0308 03:25:20.728032  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 114/120
	I0308 03:25:21.729369  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 115/120
	I0308 03:25:22.730701  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 116/120
	I0308 03:25:23.732802  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 117/120
	I0308 03:25:24.734248  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 118/120
	I0308 03:25:25.735650  935513 main.go:141] libmachine: (ha-576225-m04) Waiting for machine to stop 119/120
	I0308 03:25:26.737062  935513 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 03:25:26.737160  935513 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0308 03:25:26.739004  935513 out.go:177] 
	W0308 03:25:26.740332  935513 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0308 03:25:26.740349  935513 out.go:239] * 
	* 
	W0308 03:25:26.747205  935513 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 03:25:26.748609  935513 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-576225 stop -v=7 --alsologtostderr": exit status 82
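The exit status 82 follows directly from the trace above: the stop command polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), and because ha-576225-m04 still reports "Running" after the last attempt it gives up with GUEST_STOP_TIMEOUT. Below is a generic sketch of that poll-until-stopped pattern, not minikube's actual stop code; the state function and the short demo parameters in main are hypothetical.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForStop polls state() once per interval, up to attempts times, and
    // returns an error if the machine never leaves the "Running" state --
    // the same shape as the 0/120..119/120 loop in the log above.
    func waitForStop(state func() string, attempts int, interval time.Duration) error {
        for i := 0; i < attempts; i++ {
            if state() != "Running" {
                return nil
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            time.Sleep(interval)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // Hypothetical stuck VM that never stops; the real run above used
        // 120 attempts at one second each before timing out.
        stuck := func() string { return "Running" }
        if err := waitForStop(stuck, 3, 10*time.Millisecond); err != nil {
            fmt.Println("stop err:", err)
        }
    }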
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr: exit status 3 (18.899537955s)

                                                
                                                
-- stdout --
	ha-576225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-576225-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:25:26.812071  935817 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:25:26.812204  935817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:25:26.812213  935817 out.go:304] Setting ErrFile to fd 2...
	I0308 03:25:26.812217  935817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:25:26.812421  935817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:25:26.812589  935817 out.go:298] Setting JSON to false
	I0308 03:25:26.812617  935817 mustload.go:65] Loading cluster: ha-576225
	I0308 03:25:26.812676  935817 notify.go:220] Checking for updates...
	I0308 03:25:26.812977  935817 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:25:26.812994  935817 status.go:255] checking status of ha-576225 ...
	I0308 03:25:26.813447  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:26.813546  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:26.835380  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0308 03:25:26.835849  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:26.836558  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:26.836598  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:26.837067  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:26.837318  935817 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:25:26.839230  935817 status.go:330] ha-576225 host status = "Running" (err=<nil>)
	I0308 03:25:26.839249  935817 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:25:26.839617  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:26.839665  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:26.854035  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0308 03:25:26.854369  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:26.854746  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:26.854811  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:26.855126  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:26.855331  935817 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:25:26.858119  935817 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:25:26.858547  935817 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:25:26.858566  935817 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:25:26.858711  935817 host.go:66] Checking if "ha-576225" exists ...
	I0308 03:25:26.859008  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:26.859042  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:26.873045  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0308 03:25:26.873440  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:26.873907  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:26.873928  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:26.874244  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:26.874423  935817 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:25:26.874630  935817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:25:26.874664  935817 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:25:26.877472  935817 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:25:26.877913  935817 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:25:26.877940  935817 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:25:26.878049  935817 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:25:26.878212  935817 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:25:26.878448  935817 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:25:26.878574  935817 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:25:26.967967  935817 ssh_runner.go:195] Run: systemctl --version
	I0308 03:25:26.977072  935817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:25:26.995520  935817 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:25:26.995546  935817 api_server.go:166] Checking apiserver status ...
	I0308 03:25:26.995591  935817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:25:27.019116  935817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4960/cgroup
	W0308 03:25:27.032403  935817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:25:27.032456  935817 ssh_runner.go:195] Run: ls
	I0308 03:25:27.037672  935817 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:25:27.044537  935817 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:25:27.044560  935817 status.go:422] ha-576225 apiserver status = Running (err=<nil>)
	I0308 03:25:27.044574  935817 status.go:257] ha-576225 status: &{Name:ha-576225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:25:27.044602  935817 status.go:255] checking status of ha-576225-m02 ...
	I0308 03:25:27.044915  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.044960  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.060102  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0308 03:25:27.060547  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.061058  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.061080  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.061457  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.061647  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetState
	I0308 03:25:27.063119  935817 status.go:330] ha-576225-m02 host status = "Running" (err=<nil>)
	I0308 03:25:27.063139  935817 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:25:27.063403  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.063438  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.077841  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0308 03:25:27.078250  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.078761  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.078784  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.079087  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.079323  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetIP
	I0308 03:25:27.081908  935817 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:25:27.082338  935817 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:20:42 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:25:27.082367  935817 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:25:27.082472  935817 host.go:66] Checking if "ha-576225-m02" exists ...
	I0308 03:25:27.082866  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.082921  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.097358  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0308 03:25:27.097783  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.098253  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.098275  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.098647  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.098854  935817 main.go:141] libmachine: (ha-576225-m02) Calling .DriverName
	I0308 03:25:27.099031  935817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:25:27.099064  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHHostname
	I0308 03:25:27.101557  935817 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:25:27.101975  935817 main.go:141] libmachine: (ha-576225-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:93:a0", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:20:42 +0000 UTC Type:0 Mac:52:54:00:13:93:a0 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:ha-576225-m02 Clientid:01:52:54:00:13:93:a0}
	I0308 03:25:27.102019  935817 main.go:141] libmachine: (ha-576225-m02) DBG | domain ha-576225-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:13:93:a0 in network mk-ha-576225
	I0308 03:25:27.102112  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHPort
	I0308 03:25:27.102288  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHKeyPath
	I0308 03:25:27.102418  935817 main.go:141] libmachine: (ha-576225-m02) Calling .GetSSHUsername
	I0308 03:25:27.102554  935817 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m02/id_rsa Username:docker}
	I0308 03:25:27.186830  935817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:25:27.205207  935817 kubeconfig.go:125] found "ha-576225" server: "https://192.168.39.254:8443"
	I0308 03:25:27.205242  935817 api_server.go:166] Checking apiserver status ...
	I0308 03:25:27.205308  935817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:25:27.223552  935817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	W0308 03:25:27.234361  935817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:25:27.234432  935817 ssh_runner.go:195] Run: ls
	I0308 03:25:27.239188  935817 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0308 03:25:27.245935  935817 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0308 03:25:27.245964  935817 status.go:422] ha-576225-m02 apiserver status = Running (err=<nil>)
	I0308 03:25:27.245978  935817 status.go:257] ha-576225-m02 status: &{Name:ha-576225-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:25:27.246014  935817 status.go:255] checking status of ha-576225-m04 ...
	I0308 03:25:27.246304  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.246338  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.261496  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0308 03:25:27.261942  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.262451  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.262478  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.262920  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.263148  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetState
	I0308 03:25:27.264787  935817 status.go:330] ha-576225-m04 host status = "Running" (err=<nil>)
	I0308 03:25:27.264806  935817 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:25:27.265086  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.265123  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.280249  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0308 03:25:27.280628  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.281100  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.281135  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.281504  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.281716  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetIP
	I0308 03:25:27.284612  935817 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:25:27.285056  935817 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:22:52 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:25:27.285085  935817 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:25:27.285212  935817 host.go:66] Checking if "ha-576225-m04" exists ...
	I0308 03:25:27.285578  935817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:25:27.285624  935817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:25:27.299924  935817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39329
	I0308 03:25:27.300357  935817 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:25:27.300810  935817 main.go:141] libmachine: Using API Version  1
	I0308 03:25:27.300837  935817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:25:27.301134  935817 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:25:27.301326  935817 main.go:141] libmachine: (ha-576225-m04) Calling .DriverName
	I0308 03:25:27.301475  935817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:25:27.301493  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHHostname
	I0308 03:25:27.304017  935817 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:25:27.304456  935817 main.go:141] libmachine: (ha-576225-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:99:43", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:22:52 +0000 UTC Type:0 Mac:52:54:00:66:99:43 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-576225-m04 Clientid:01:52:54:00:66:99:43}
	I0308 03:25:27.304485  935817 main.go:141] libmachine: (ha-576225-m04) DBG | domain ha-576225-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:66:99:43 in network mk-ha-576225
	I0308 03:25:27.304624  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHPort
	I0308 03:25:27.304783  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHKeyPath
	I0308 03:25:27.304973  935817 main.go:141] libmachine: (ha-576225-m04) Calling .GetSSHUsername
	I0308 03:25:27.305090  935817 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225-m04/id_rsa Username:docker}
	W0308 03:25:45.649478  935817 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.68:22: connect: no route to host
	W0308 03:25:45.649618  935817 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	E0308 03:25:45.649639  935817 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host
	I0308 03:25:45.649654  935817 status.go:257] ha-576225-m04 status: &{Name:ha-576225-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0308 03:25:45.649676  935817 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.68:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-576225 -n ha-576225
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-576225 logs -n 25: (1.911387775s)
helpers_test.go:252: TestMutliControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m04 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp testdata/cp-test.txt                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225:/home/docker/cp-test_ha-576225-m04_ha-576225.txt                       |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225 sudo cat                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225.txt                                 |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m02:/home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m02 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m03:/home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n                                                                 | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | ha-576225-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-576225 ssh -n ha-576225-m03 sudo cat                                          | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC | 08 Mar 24 03:13 UTC |
	|         | /home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-576225 node stop m02 -v=7                                                     | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-576225 node start m02 -v=7                                                    | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-576225 -v=7                                                           | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-576225 -v=7                                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-576225 --wait=true -v=7                                                    | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:18 UTC | 08 Mar 24 03:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-576225                                                                | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:23 UTC |                     |
	| node    | ha-576225 node delete m03 -v=7                                                   | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:23 UTC | 08 Mar 24 03:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-576225 stop -v=7                                                              | ha-576225 | jenkins | v1.32.0 | 08 Mar 24 03:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:18:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:18:55.693590  934050 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:18:55.694085  934050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:18:55.694105  934050 out.go:304] Setting ErrFile to fd 2...
	I0308 03:18:55.694112  934050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:18:55.694605  934050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:18:55.695841  934050 out.go:298] Setting JSON to false
	I0308 03:18:55.696834  934050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":25262,"bootTime":1709842674,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:18:55.696916  934050 start.go:139] virtualization: kvm guest
	I0308 03:18:55.698848  934050 out.go:177] * [ha-576225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:18:55.700650  934050 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:18:55.702081  934050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:18:55.700714  934050 notify.go:220] Checking for updates...
	I0308 03:18:55.704768  934050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:18:55.706228  934050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:18:55.707640  934050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:18:55.708975  934050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:18:55.710669  934050 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:18:55.710765  934050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:18:55.711179  934050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:18:55.711224  934050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:18:55.727843  934050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0308 03:18:55.728263  934050 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:18:55.728810  934050 main.go:141] libmachine: Using API Version  1
	I0308 03:18:55.728834  934050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:18:55.729228  934050 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:18:55.729449  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.766456  934050 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:18:55.767796  934050 start.go:297] selected driver: kvm2
	I0308 03:18:55.767809  934050 start.go:901] validating driver "kvm2" against &{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:18:55.767962  934050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:18:55.768320  934050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:18:55.768413  934050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:18:55.783843  934050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:18:55.784480  934050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:18:55.784553  934050 cni.go:84] Creating CNI manager for ""
	I0308 03:18:55.784564  934050 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0308 03:18:55.784632  934050 start.go:340] cluster config:
	{Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:18:55.784781  934050 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:18:55.786541  934050 out.go:177] * Starting "ha-576225" primary control-plane node in "ha-576225" cluster
	I0308 03:18:55.787925  934050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:18:55.787958  934050 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:18:55.787969  934050 cache.go:56] Caching tarball of preloaded images
	I0308 03:18:55.788045  934050 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:18:55.788057  934050 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:18:55.788172  934050 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/config.json ...
	I0308 03:18:55.788351  934050 start.go:360] acquireMachinesLock for ha-576225: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:18:55.788395  934050 start.go:364] duration metric: took 26.174µs to acquireMachinesLock for "ha-576225"
	I0308 03:18:55.788410  934050 start.go:96] Skipping create...Using existing machine configuration
	I0308 03:18:55.788418  934050 fix.go:54] fixHost starting: 
	I0308 03:18:55.788665  934050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:18:55.788695  934050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:18:55.803299  934050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0308 03:18:55.803741  934050 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:18:55.804198  934050 main.go:141] libmachine: Using API Version  1
	I0308 03:18:55.804220  934050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:18:55.804535  934050 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:18:55.804749  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.804881  934050 main.go:141] libmachine: (ha-576225) Calling .GetState
	I0308 03:18:55.806557  934050 fix.go:112] recreateIfNeeded on ha-576225: state=Running err=<nil>
	W0308 03:18:55.806579  934050 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 03:18:55.808259  934050 out.go:177] * Updating the running kvm2 "ha-576225" VM ...
	I0308 03:18:55.809487  934050 machine.go:94] provisionDockerMachine start ...
	I0308 03:18:55.809508  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:18:55.809729  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:55.812039  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.812501  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:55.812527  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.812668  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:55.812832  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.812975  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.813124  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:55.813315  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:55.813500  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:55.813512  934050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 03:18:55.936168  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:18:55.936204  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:55.936458  934050 buildroot.go:166] provisioning hostname "ha-576225"
	I0308 03:18:55.936487  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:55.936709  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:55.939467  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.939922  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:55.939953  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:55.940054  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:55.940236  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.940387  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:55.940547  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:55.940794  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:55.940984  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:55.940996  934050 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-576225 && echo "ha-576225" | sudo tee /etc/hostname
	I0308 03:18:56.076036  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-576225
	
	I0308 03:18:56.076077  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.078815  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.079249  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.079273  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.079455  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.079669  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.079824  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.079961  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.080106  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:56.080285  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:56.080317  934050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-576225' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-576225/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-576225' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:18:56.198665  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:18:56.198719  934050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:18:56.198736  934050 buildroot.go:174] setting up certificates
	I0308 03:18:56.198746  934050 provision.go:84] configureAuth start
	I0308 03:18:56.198754  934050 main.go:141] libmachine: (ha-576225) Calling .GetMachineName
	I0308 03:18:56.199059  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:18:56.201938  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.202357  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.202383  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.202555  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.205072  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.205412  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.205446  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.205564  934050 provision.go:143] copyHostCerts
	I0308 03:18:56.205616  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:18:56.205662  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:18:56.205675  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:18:56.205768  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:18:56.205883  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:18:56.205910  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:18:56.205917  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:18:56.205957  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:18:56.206034  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:18:56.206059  934050 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:18:56.206068  934050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:18:56.206099  934050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:18:56.206187  934050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.ha-576225 san=[127.0.0.1 192.168.39.251 ha-576225 localhost minikube]
	I0308 03:18:56.295338  934050 provision.go:177] copyRemoteCerts
	I0308 03:18:56.295399  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:18:56.295429  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.297940  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.298258  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.298290  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.298420  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.298612  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.298793  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.298926  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:18:56.389721  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:18:56.389790  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:18:56.419979  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:18:56.420044  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0308 03:18:56.447385  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:18:56.447438  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:18:56.474531  934050 provision.go:87] duration metric: took 275.770203ms to configureAuth
	I0308 03:18:56.474558  934050 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:18:56.474768  934050 config.go:182] Loaded profile config "ha-576225": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:18:56.474845  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:18:56.477520  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.477839  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:18:56.477863  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:18:56.478024  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:18:56.478218  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.478362  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:18:56.478483  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:18:56.478645  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:18:56.478860  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:18:56.478887  934050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:20:27.318236  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:20:27.318272  934050 machine.go:97] duration metric: took 1m31.5087671s to provisionDockerMachine
	I0308 03:20:27.318288  934050 start.go:293] postStartSetup for "ha-576225" (driver="kvm2")
	I0308 03:20:27.318300  934050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:20:27.318336  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.318757  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:20:27.318789  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.321952  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.322409  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.322439  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.322609  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.322809  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.322966  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.323108  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.413061  934050 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:20:27.417871  934050 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:20:27.417893  934050 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:20:27.417949  934050 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:20:27.418024  934050 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:20:27.418035  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:20:27.418127  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:20:27.428103  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:20:27.455886  934050 start.go:296] duration metric: took 137.572557ms for postStartSetup
	I0308 03:20:27.455970  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.456239  934050 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0308 03:20:27.456264  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.459057  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.459499  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.459540  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.459707  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.459894  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.460042  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.460158  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	W0308 03:20:27.547638  934050 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0308 03:20:27.547682  934050 fix.go:56] duration metric: took 1m31.759264312s for fixHost
	I0308 03:20:27.547703  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.550352  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.550742  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.550770  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.550963  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.551153  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.551375  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.551537  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.551710  934050 main.go:141] libmachine: Using SSH client type: native
	I0308 03:20:27.551887  934050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0308 03:20:27.551898  934050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:20:27.666992  934050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709868027.624546486
	
	I0308 03:20:27.667019  934050 fix.go:216] guest clock: 1709868027.624546486
	I0308 03:20:27.667026  934050 fix.go:229] Guest: 2024-03-08 03:20:27.624546486 +0000 UTC Remote: 2024-03-08 03:20:27.547690075 +0000 UTC m=+91.903693214 (delta=76.856411ms)
	I0308 03:20:27.667050  934050 fix.go:200] guest clock delta is within tolerance: 76.856411ms
	I0308 03:20:27.667057  934050 start.go:83] releasing machines lock for "ha-576225", held for 1m31.878652614s
	I0308 03:20:27.667082  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.667360  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:20:27.670055  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.670458  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.670479  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.670698  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671237  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671405  934050 main.go:141] libmachine: (ha-576225) Calling .DriverName
	I0308 03:20:27.671511  934050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:20:27.671553  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.671619  934050 ssh_runner.go:195] Run: cat /version.json
	I0308 03:20:27.671643  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHHostname
	I0308 03:20:27.673960  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674249  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674318  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.674343  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674480  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.674668  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.674828  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:27.674847  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:27.674859  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.674970  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHPort
	I0308 03:20:27.675039  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.675121  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHKeyPath
	I0308 03:20:27.675259  934050 main.go:141] libmachine: (ha-576225) Calling .GetSSHUsername
	I0308 03:20:27.675422  934050 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/ha-576225/id_rsa Username:docker}
	I0308 03:20:27.782462  934050 ssh_runner.go:195] Run: systemctl --version
	I0308 03:20:27.789172  934050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:20:27.960974  934050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:20:27.969958  934050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:20:27.970019  934050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:20:27.980036  934050 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 03:20:27.980056  934050 start.go:494] detecting cgroup driver to use...
	I0308 03:20:27.980107  934050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:20:27.997625  934050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:20:28.012549  934050 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:20:28.012644  934050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:20:28.026785  934050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:20:28.040643  934050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:20:28.191522  934050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:20:28.342429  934050 docker.go:233] disabling docker service ...
	I0308 03:20:28.342495  934050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:20:28.360775  934050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:20:28.375036  934050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:20:28.526994  934050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:20:28.681535  934050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:20:28.697207  934050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:20:28.719206  934050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:20:28.719286  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.730970  934050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:20:28.731028  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.742085  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:20:28.753251  934050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
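The three sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-insert conmon_cgroup in the CRI-O drop-in. A rough Go equivalent of those edits is sketched below; it illustrates the same substitutions and is not minikube's actual ssh_runner path — the file path and replacement values are taken from the commands in the log.

// crio_conf_edit.go - sketch of the edits the sed commands above make to
// /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, set the cgroup
// manager to cgroupfs, and put conmon in the "pod" cgroup.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pause_image = "registry.k8s.io/pause:3.9"
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = "cgroupfs"
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}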
	I0308 03:20:28.764417  934050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:20:28.775957  934050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:20:28.785920  934050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:20:28.796000  934050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:20:28.945853  934050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:20:29.244069  934050 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:20:29.244175  934050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:20:29.250363  934050 start.go:562] Will wait 60s for crictl version
	I0308 03:20:29.250426  934050 ssh_runner.go:195] Run: which crictl
	I0308 03:20:29.254967  934050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:20:29.299767  934050 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:20:29.299885  934050 ssh_runner.go:195] Run: crio --version
	I0308 03:20:29.334250  934050 ssh_runner.go:195] Run: crio --version
	I0308 03:20:29.367646  934050 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:20:29.368890  934050 main.go:141] libmachine: (ha-576225) Calling .GetIP
	I0308 03:20:29.371437  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:29.371793  934050 main.go:141] libmachine: (ha-576225) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:24:e8", ip: ""} in network mk-ha-576225: {Iface:virbr1 ExpiryTime:2024-03-08 04:08:55 +0000 UTC Type:0 Mac:52:54:00:53:24:e8 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-576225 Clientid:01:52:54:00:53:24:e8}
	I0308 03:20:29.371821  934050 main.go:141] libmachine: (ha-576225) DBG | domain ha-576225 has defined IP address 192.168.39.251 and MAC address 52:54:00:53:24:e8 in network mk-ha-576225
	I0308 03:20:29.372008  934050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:20:29.377527  934050 kubeadm.go:877] updating cluster {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:20:29.377728  934050 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:20:29.377812  934050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:20:29.424638  934050 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:20:29.424661  934050 crio.go:415] Images already preloaded, skipping extraction
	I0308 03:20:29.424722  934050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:20:29.463047  934050 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:20:29.463071  934050 cache_images.go:84] Images are preloaded, skipping loading
	I0308 03:20:29.463094  934050 kubeadm.go:928] updating node { 192.168.39.251 8443 v1.28.4 crio true true} ...
	I0308 03:20:29.463218  934050 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-576225 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:20:29.463304  934050 ssh_runner.go:195] Run: crio config
	I0308 03:20:29.514810  934050 cni.go:84] Creating CNI manager for ""
	I0308 03:20:29.514839  934050 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0308 03:20:29.514853  934050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:20:29.514879  934050 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.251 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-576225 NodeName:ha-576225 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.251"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.251 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:20:29.515083  934050 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.251
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-576225"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.251
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.251"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
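The kubeadm config rendered above is multi-document YAML, so a quick sanity check is to parse it and confirm the handful of values that matter before kubeadm consumes it. The sketch below is a hypothetical helper, not part of minikube; it assumes a local copy of the rendered file (the log later copies it to /var/tmp/minikube/kubeadm.yaml.new) and uses gopkg.in/yaml.v3 to walk the documents.

// check_kubeadm_config.go - minimal sketch (assumed helper, not minikube code):
// parse the multi-document kubeadm YAML shown above and print a couple of the
// fields worth verifying.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the rendered config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"]) // expect v1.28.4
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", doc["clusterCIDR"]) // expect 10.244.0.0/16
		}
	}
}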
	
	I0308 03:20:29.515117  934050 kube-vip.go:101] generating kube-vip config ...
	I0308 03:20:29.515228  934050 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
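Everything in the kube-vip static pod manifest above is fixed apart from the VIP address, the interface, and the image tag, so generating it amounts to filling a small template. The sketch below is a hypothetical illustration, not minikube's kube-vip generator; the parameter values (VIP 192.168.39.254, eth0, kube-vip v0.7.1) come from the log, and the template keeps only a representative subset of the env entries shown above.

// render_kube_vip.go - hypothetical sketch of producing a kube-vip static pod
// manifest like the one logged above from the few values that vary per cluster.
package main

import (
	"os"
	"text/template"
)

const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: vip_interface, value: "{{.Interface}}"}
    - {name: address, value: "{{.Address}}"}
    - {name: port, value: "8443"}
    - {name: cp_enable, value: "true"}
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]
    volumeMounts:
    - {mountPath: /etc/kubernetes/admin.conf, name: kubeconfig}
  hostNetwork: true
  volumes:
  - {name: kubeconfig, hostPath: {path: /etc/kubernetes/admin.conf}}
`

type vipParams struct{ Address, Interface, Image string }

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// Values taken from the log above.
	p := vipParams{Address: "192.168.39.254", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v0.7.1"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}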
	I0308 03:20:29.515286  934050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:20:29.526769  934050 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:20:29.526868  934050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0308 03:20:29.538517  934050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0308 03:20:29.557824  934050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:20:29.575758  934050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 03:20:29.594221  934050 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1263 bytes)
	I0308 03:20:29.611766  934050 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0308 03:20:29.616938  934050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:20:29.777183  934050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:20:29.862812  934050 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225 for IP: 192.168.39.251
	I0308 03:20:29.862838  934050 certs.go:194] generating shared ca certs ...
	I0308 03:20:29.862859  934050 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.863056  934050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:20:29.863117  934050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:20:29.863132  934050 certs.go:256] generating profile certs ...
	I0308 03:20:29.863236  934050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/client.key
	I0308 03:20:29.863281  934050 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7
	I0308 03:20:29.863304  934050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.251 192.168.39.128 192.168.39.17 192.168.39.254]
	I0308 03:20:29.918862  934050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 ...
	I0308 03:20:29.918895  934050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7: {Name:mk09cb6a2e10d207415096ad10e4b87e7bf27b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.919086  934050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7 ...
	I0308 03:20:29.919103  934050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7: {Name:mkf66996a85416a2e12670d15a6b3c96e7ca62a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:20:29.919207  934050 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt.0f4c02d7 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt
	I0308 03:20:29.919405  934050 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key.0f4c02d7 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key
	I0308 03:20:29.919584  934050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key
	I0308 03:20:29.919603  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:20:29.919621  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:20:29.919637  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:20:29.919661  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:20:29.919688  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:20:29.919707  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:20:29.919725  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:20:29.919746  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:20:29.919826  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:20:29.919871  934050 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:20:29.919887  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:20:29.919920  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:20:29.919946  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:20:29.919969  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:20:29.920005  934050 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:20:29.920039  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:20:29.920062  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:20:29.920080  934050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:29.920682  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:20:29.947672  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:20:29.979696  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:20:30.005180  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:20:30.039857  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 03:20:30.064976  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:20:30.092076  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:20:30.118146  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/ha-576225/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:20:30.145971  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:20:30.171307  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:20:30.196847  934050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:20:30.223296  934050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:20:30.243469  934050 ssh_runner.go:195] Run: openssl version
	I0308 03:20:30.250304  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:20:30.267867  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.273161  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.273234  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:20:30.279802  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:20:30.290653  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:20:30.303890  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.311738  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.311786  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:20:30.343013  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:20:30.353697  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:20:30.365313  934050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.370314  934050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.370358  934050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:20:30.376716  934050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:20:30.386548  934050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:20:30.391426  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 03:20:30.397438  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 03:20:30.403395  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 03:20:30.409658  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 03:20:30.415714  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 03:20:30.421675  934050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
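Each of the openssl invocations above is just `-checkend 86400`, i.e. "does this certificate expire within the next 24 hours". The sketch below is an illustration of the same check done natively with crypto/x509 (not what minikube runs on the guest); the file path is one of the paths checked above and the 24-hour window mirrors the 86400-second argument.

// cert_checkend.go - sketch of the `openssl x509 -checkend 86400` check in Go:
// report whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; 24h mirrors openssl's -checkend 86400.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}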
	I0308 03:20:30.428087  934050 kubeadm.go:391] StartCluster: {Name:ha-576225 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-576225 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.251 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.17 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:20:30.428195  934050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:20:30.428229  934050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:20:30.467322  934050 cri.go:89] found id: "2ddece87299a4ab5401ca03a7ee45a1fa30f45a0c84e2be85c25c65370263695"
	I0308 03:20:30.467341  934050 cri.go:89] found id: "087b18b1034c8ec0a5ae325ddf86eab41c98a172d5559d565e6a42cce60940a7"
	I0308 03:20:30.467344  934050 cri.go:89] found id: "d58e904f7b410b152ab1b98f2b1abc397aaad1e24fa604547ed0fce883eb6d49"
	I0308 03:20:30.467347  934050 cri.go:89] found id: "4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3"
	I0308 03:20:30.467350  934050 cri.go:89] found id: "6dcd572cdc4caa0abffa88b83722ba9894bf4d17a67aeeaace23b5c22137c22f"
	I0308 03:20:30.467355  934050 cri.go:89] found id: "c751323fea4d935d98480f4b087704662a531c6182f4b1fb5df20096e01ee3ba"
	I0308 03:20:30.467358  934050 cri.go:89] found id: "c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788"
	I0308 03:20:30.467361  934050 cri.go:89] found id: "e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df"
	I0308 03:20:30.467365  934050 cri.go:89] found id: "da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176"
	I0308 03:20:30.467372  934050 cri.go:89] found id: "79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025"
	I0308 03:20:30.467376  934050 cri.go:89] found id: "556a4677df889b6eb04747a13b5839e83228e63f48d261ad42c84556f2ecf6d2"
	I0308 03:20:30.467381  934050 cri.go:89] found id: "fe007de6550daad402392f2cda0741b09d63d85f534309fb961e892e55cbc34c"
	I0308 03:20:30.467387  934050 cri.go:89] found id: "77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446"
	I0308 03:20:30.467392  934050 cri.go:89] found id: ""
	I0308 03:20:30.467458  934050 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.314283280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48161ef7-01e2-481d-b8c9-99089e31bd41 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.314746049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fec17bb3-e4f3-4814-a723-e51618d1a653 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.315455559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fec17bb3-e4f3-4814-a723-e51618d1a653 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.315837778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fec17bb3-e4f3-4814-a723-e51618d1a653 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.370852816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=466deb1f-396a-43a3-bf19-d07e96b6395e name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.370923873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=466deb1f-396a-43a3-bf19-d07e96b6395e name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.372051808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47884fbd-4bd0-4900-ae79-884dd5406a24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.372561335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868346372536632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47884fbd-4bd0-4900-ae79-884dd5406a24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.373616351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7a68964-dfe2-4f30-920b-8d84777c1674 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.373681172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7a68964-dfe2-4f30-920b-8d84777c1674 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.374090341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7a68964-dfe2-4f30-920b-8d84777c1674 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.423182601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f169536e-3c4e-4e85-81f6-6002283d38a9 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.423278694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f169536e-3c4e-4e85-81f6-6002283d38a9 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.426631593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db882fd5-8d54-4fd8-8c20-c47fc6827c66 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.428061705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868346428034103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db882fd5-8d54-4fd8-8c20-c47fc6827c66 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.429179668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b9a8596-a3ac-490b-924d-e174ce411c01 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.429262795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b9a8596-a3ac-490b-924d-e174ce411c01 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.430133309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b9a8596-a3ac-490b-924d-e174ce411c01 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.484017385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50cfdee0-4897-4acd-9255-8bea086186fa name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.484116689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50cfdee0-4897-4acd-9255-8bea086186fa name=/runtime.v1.RuntimeService/Version
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.485479604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7e9ffc0-32f1-4998-90f0-9651aa54cf9b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.486190776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709868346486165686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7e9ffc0-32f1-4998-90f0-9651aa54cf9b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.487153798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c09b91aa-da8e-41e8-9a0d-f7865397df61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.487284139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c09b91aa-da8e-41e8-9a0d-f7865397df61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:25:46 ha-576225 crio[3866]: time="2024-03-08 03:25:46.487747396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709868102016309498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7a331848,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7961f33abef9eb0139f1ced7f45849e3bfe847b93fc486dda47e872aa0770847,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709868101998066150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709868082977889143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2990314b56fb75407ba67d7697d42f81c1dca4f85220ae4ea5b5e942610f36,PodSandboxId:a28ed1ee400c976408171028deba5905253d6f943b3d3c2e28d16b5dbb7109f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709868070293638925,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709868069033734645,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotations:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:247265ee3f9eaf2120e79d7055da571490fcd3309a9ded78a24de68f9d1c3792,PodSandboxId:18a4467d6c1a68986fd32e4820e69f276e0c8756f0f8f97567fa02cd61d0ef81,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1709868038523671925,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291,PodSandboxId:1b2964d4180160fac3c1994b6d8a1f2fe72fa4594100d09c0de7b20f985ff598,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709868037719883883,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ce39c2-3ef3-4c2a-996c-47a02fd12f4e,},Annotations:map[string]string{io.kubernetes.container.hash: ffbe05f8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2,PodSandboxId:8f4d0b4c36be7880ca6008b11622fd394988729ba50e3b1f06d3a7c646252665,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709868038339255399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb96559
bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84,PodSandboxId:7b5a7e1bf92b71c6639f915db2e3c983a0ecc36d545fc70b1977ec4df59f0e6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037091797572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1,PodSandboxId:ca908871c8b994bbec4e0ed1277b264a3880fc13c034fc78090e8c66868f312e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709868036911305738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d,PodSandboxId:fa3754a5a19804fefd91532d46875dcf0cdb49a30d1ba39a200878135a616ee1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709868037094580208,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603,PodSandboxId:d5cc3aab4490e68e1cb10738a1ed0408d054092d67e035acb51dcac66d7162c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709868036837131595,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619
bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4,PodSandboxId:bad7a444aad7cab7dad05d8905e626815aaf4d6af7ad9e3d34a894864ac77664,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709868036791879375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb9fc89b7fdb50461eab2dcf2451250e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ab23cc1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785,PodSandboxId:299d8fccfabecc7cabccd975eb819fe2506518aba7c6fbaf9615d6ebda779e58,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709868036850799649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43f1b4602f1b00b137428ffec94b74a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409,PodSandboxId:5e3b38f17f0364a23480df430b769983b370c93b9ea9ff21407aadb2ade9b4b4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709868030244991726,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dxqvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b9ef4f-0693-425c-b9e5-3232abe019b1,},Annotations:map[string]string{io.kuber
netes.container.hash: 7a331848,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7d5042ade2945259e33973dc7277a1844871e426cedc195a4fa355e33a51e3,PodSandboxId:a6b1803470779e8bd2d4b90a5eeee40b3c00c70ca9e38062918c05a931405cfa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1709867837977051544,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79332678c9cff5037e42e087635740e0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c47fc8f,io.kubern
etes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5282718f03eb59823c4690e236f22b4c732b8dfed00bfdbba631df1d083cfb9,PodSandboxId:0524f01439e2fe09d37fec7b532871c7f4aa109fb336a816632d23e4b7cbb7e6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709867547347087139,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-9594n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8bc0fba-1a5c-4082-a505-a0653c59180a,},Annotations:map[string]string{io.kubernetes.container.hash: b6393d7d,io.kuberne
tes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788,PodSandboxId:632fde5a7793c4f1b3894fcd3e78971eeae5cd4a118a1642f938024e2744edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383283543193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8qvhp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7686e8de-1f0a-4952-822a-22e888b17da3,},Annotations:map[string]string{io.kubernetes.container.hash: 409abd6,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df,PodSandboxId:5d9f21a723332d85da1922c32d196f1a0a935fad6ca87bca657aa509004bc355,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709867383257803509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pqz96,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: e2bf0fdf-7908-4600-8e88-7496688efb0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9b549360,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176,PodSandboxId:9f60642cbf5afb1311a23a6917528041724503c5e1fb5337bf9c815e2917690d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1709867379130513346,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcmj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43be60bc-c064-4f45-9653-15b886260114,},Annotations:map[string]string{io.kubernetes.container.hash: e096bb6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025,PodSandboxId:5b9d25fbfde63add7976bb6254d450e815ec3266ac0f6dd8ad770e7f9496297f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1709867359284507763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26cdb4c7afaf223219da4d02f01a1ea4,},Annotations:map[string]string{io.kubernetes.container.hash: ae648b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446,PodSandboxId:7a8444878ab4c64be1eb8f4c35341868dfd5655fff56f2bd18019474bfefb228,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709867359110796101,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-576225,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af200b4f08e9aba6d5619bb32fa9f733,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c09b91aa-da8e-41e8-9a0d-f7865397df61 name=/runtime.v1.RuntimeService/ListContainers
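
The debug entries above are crio answering the standard CRI RPCs that the kubelet polls (Version, ImageFsInfo and ListContainers). For ad-hoc debugging the same RPCs can be issued by hand from inside the node with crictl; a minimal sketch, assuming crictl is pointed at the cri-o socket advertised in the node annotations further down (unix:///var/run/crio/crio.sock):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

Each command corresponds to one of the Request/Response pairs logged above.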
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08c05f03945c6       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   5e3b38f17f036       kindnet-dxqvf
	7961f33abef9e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   1b2964d418016       storage-provisioner
	e98027e15146a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   299d8fccfabec       kube-controller-manager-ha-576225
	ba2990314b56f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   a28ed1ee400c9       busybox-5b5d89c9d6-9594n
	690c7f04f7df3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   bad7a444aad7c       kube-apiserver-ha-576225
	247265ee3f9ea       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   18a4467d6c1a6       kube-vip-ha-576225
	330abab8c9d77       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   8f4d0b4c36be7       kube-proxy-pcmj2
	f39e571f16421       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   1b2964d418016       storage-provisioner
	a2f20b74182ef       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   fa3754a5a1980       coredns-5dd5756b68-pqz96
	fb96559bcdaca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   7b5a7e1bf92b7       coredns-5dd5756b68-8qvhp
	32c08296db363       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   ca908871c8b99       etcd-ha-576225
	cf5e9db04d632       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   299d8fccfabec       kube-controller-manager-ha-576225
	41152db457cd3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   d5cc3aab4490e       kube-scheduler-ha-576225
	9417e2d81aaec       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   bad7a444aad7c       kube-apiserver-ha-576225
	8c8be87a59f4f       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   5e3b38f17f036       kindnet-dxqvf
	4b7d5042ade29       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      8 minutes ago       Exited              kube-vip                  2                   a6b1803470779       kube-vip-ha-576225
	c5282718f03eb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   0524f01439e2f       busybox-5b5d89c9d6-9594n
	c29d3c09ae3c4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   632fde5a7793c       coredns-5dd5756b68-8qvhp
	e6551e5e70b01       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   5d9f21a723332       coredns-5dd5756b68-pqz96
	da2c9bb706201       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   9f60642cbf5af       kube-proxy-pcmj2
	79db3710d20d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Exited              etcd                      0                   5b9d25fbfde63       etcd-ha-576225
	77dc7f2494354       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Exited              kube-scheduler            0                   7a8444878ab4c       kube-scheduler-ha-576225
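
The CREATED/STATE/ATTEMPT columns above are the human-readable form of the ListContainers data in the crio debug log: the Exited kube-apiserver (attempt 2) and kube-controller-manager (attempt 1) entries sit next to their Running replacements (attempts 3 and 2 respectively), which is what a control-plane restart is expected to look like. A quick way to reproduce this table against a live profile; a sketch, assuming the ha-576225 profile is still up and this exact minikube invocation:

	minikube -p ha-576225 ssh "sudo crictl ps -a"

crictl ps -a includes exited containers, which is why the earlier attempts remain visible alongside the restarted ones.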
	
	
	==> coredns [a2f20b74182eff8f7cfac8e2b79e9720b0c65d9ff846ecba28d401a7d0ee2b0d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38636 - 32327 "HINFO IN 498640267154758940.948528575063073994. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011032601s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c29d3c09ae3c49684dd236d3720f5a5c7bb0cbb703cea1ba1fdce876204d0788] <==
	[INFO] 10.244.0.4:54781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156597s
	[INFO] 10.244.2.2:60351 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156855s
	[INFO] 10.244.2.2:51544 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122332s
	[INFO] 10.244.2.2:36974 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001216836s
	[INFO] 10.244.2.2:46648 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079695s
	[INFO] 10.244.2.2:59692 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116087s
	[INFO] 10.244.1.2:55081 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181347s
	[INFO] 10.244.1.2:33288 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001414035s
	[INFO] 10.244.1.2:34740 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200343s
	[INFO] 10.244.1.2:34593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089308s
	[INFO] 10.244.0.4:57556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168693s
	[INFO] 10.244.0.4:55624 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070785s
	[INFO] 10.244.2.2:38017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203686s
	[INFO] 10.244.2.2:38702 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000143629s
	[INFO] 10.244.2.2:39439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082627s
	[INFO] 10.244.1.2:41980 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276421s
	[INFO] 10.244.0.4:55612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118127s
	[INFO] 10.244.0.4:54270 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081257s
	[INFO] 10.244.2.2:49847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192089s
	[INFO] 10.244.2.2:45358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198525s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6551e5e70b016e7655de205edf965c79fb6f1e5e77c6b824513ad4e3dcb11df] <==
	[INFO] 10.244.2.2:44074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245211s
	[INFO] 10.244.2.2:47550 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020143s
	[INFO] 10.244.1.2:36967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124177s
	[INFO] 10.244.1.2:49099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135326s
	[INFO] 10.244.1.2:38253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253563s
	[INFO] 10.244.1.2:39140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097524s
	[INFO] 10.244.0.4:50886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000066375s
	[INFO] 10.244.0.4:36001 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044745s
	[INFO] 10.244.2.2:52701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189269s
	[INFO] 10.244.1.2:56384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178001s
	[INFO] 10.244.1.2:57745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181456s
	[INFO] 10.244.1.2:36336 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125903s
	[INFO] 10.244.0.4:51847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152568s
	[INFO] 10.244.0.4:40398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000222601s
	[INFO] 10.244.2.2:39215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179733s
	[INFO] 10.244.2.2:44810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018976s
	[INFO] 10.244.1.2:53930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169054s
	[INFO] 10.244.1.2:39490 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132254s
	[INFO] 10.244.1.2:45653 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129104s
	[INFO] 10.244.1.2:57813 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154053s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fb96559bcdaca800030bf4b26e30f111db116afc8677238d6989756133c6dd84] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:41807 - 37908 "HINFO IN 8968042253440839441.6866134497195940646. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.10461782s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:49664->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
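
Both restarted coredns replicas log the same pattern: they come up before the apiserver is reachable, wait on the kubernetes plugin, and then report TLS handshake timeouts or "no route to host" against 10.96.0.1:443, the in-cluster Service VIP for the API server. Once the control plane is back, the Service and its backing endpoints can be checked directly; a hedged sketch, assuming the kubectl context carries the profile name ha-576225:

	kubectl --context ha-576225 -n default get service kubernetes
	kubectl --context ha-576225 -n default get endpoints kubernetes

An empty endpoints list while these warnings persist would mean the VIP has no healthy apiserver behind it yet.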
	
	
	==> describe nodes <==
	Name:               ha-576225
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:25:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:21:18 +0000   Fri, 08 Mar 2024 03:09:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-576225
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 1732a5e385cf44ce86b216e3f63b18e9
	  System UUID:                1732a5e3-85cf-44ce-86b2-16e3f63b18e9
	  Boot ID:                    22459aef-7ea9-46db-b507-1fb97d6edacd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9594n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-8qvhp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-pqz96             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-576225                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-dxqvf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-576225             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-576225    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pcmj2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-576225             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-576225                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m25s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-576225 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-576225 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-576225 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-576225 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Warning  ContainerGCFailed        5m17s (x2 over 6m17s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-576225 event: Registered Node ha-576225 in Controller
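
	For reference, the percentages in the Allocated resources table above are fractions of the node's Capacity/Allocatable figures: 950m of CPU requested on a 2-CPU (2000m) node is 950/2000 ≈ 47%, and 290Mi of memory requested out of 2164188Ki (≈ 2113Mi) is 290/2113 ≈ 13%, matching the 47% and 13% shown.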
	
	
	Name:               ha-576225-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_10_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:10:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:25:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:22:01 +0000   Fri, 08 Mar 2024 03:21:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    ha-576225-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 852d29792aec4a87b8b6c74704738411
	  System UUID:                852d2979-2aec-4a87-b8b6-c74704738411
	  Boot ID:                    24134511-472b-4c29-ab6a-e21202d1931a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wlj7r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-576225-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-w8zww                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-576225-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-576225-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vjfqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-576225-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-576225-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  RegisteredNode           15m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-576225-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-576225-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-576225-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-576225-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-576225-m02 event: Registered Node ha-576225-m02 in Controller
	
	
	Name:               ha-576225-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-576225-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=ha-576225
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_13_02_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:13:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-576225-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:23:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:24:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:24:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:24:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 08 Mar 2024 03:22:58 +0000   Fri, 08 Mar 2024 03:24:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-576225-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 524efacfa67040b0afe359afd19efdd6
	  System UUID:                524efacf-a670-40b0-afe3-59afd19efdd6
	  Boot ID:                    ef4a3e89-b497-42bb-928e-511b4417aeef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tbsl5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5qbg6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-mk2g8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)      kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)      kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)      kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-576225-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-576225-m04 event: Registered Node ha-576225-m04 in Controller
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-576225-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-576225-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-576225-m04 has been rebooted, boot id: ef4a3e89-b497-42bb-928e-511b4417aeef
	  Normal   NodeReady                2m49s                  kubelet          Node ha-576225-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m32s)   node-controller  Node ha-576225-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 8 03:09] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.056257] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063726] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.163955] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.153131] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264990] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.215071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060445] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.086248] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.235554] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.086526] kauditd_printk_skb: 40 callbacks suppressed
	[  +2.541733] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[ +10.298670] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.185227] kauditd_printk_skb: 68 callbacks suppressed
	[Mar 8 03:20] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.158565] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.179387] systemd-fstab-generator[3814]: Ignoring "noauto" option for root device
	[  +0.155760] systemd-fstab-generator[3826]: Ignoring "noauto" option for root device
	[  +0.261817] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[  +0.817334] systemd-fstab-generator[3957]: Ignoring "noauto" option for root device
	[  +6.712248] kauditd_printk_skb: 132 callbacks suppressed
	[ +13.785911] kauditd_printk_skb: 83 callbacks suppressed
	[Mar 8 03:21] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [32c08296db3633e8a1825df7ed1cbf0115ba36d32dd7bf43d5853682b76af3c1] <==
	{"level":"info","ts":"2024-03-08T03:22:14.62788Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:14.631091Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9ebeb2ab026a2136","to":"3687119b759a7dfe","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-08T03:22:14.631117Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:22:17.548993Z","caller":"traceutil/trace.go:171","msg":"trace[560031978] transaction","detail":"{read_only:false; response_revision:2268; number_of_response:1; }","duration":"170.527078ms","start":"2024-03-08T03:22:17.378393Z","end":"2024-03-08T03:22:17.54892Z","steps":["trace[560031978] 'process raft request'  (duration: 170.394537ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:23:03.263082Z","caller":"traceutil/trace.go:171","msg":"trace[743005755] transaction","detail":"{read_only:false; response_revision:2442; number_of_response:1; }","duration":"145.771747ms","start":"2024-03-08T03:23:03.117273Z","end":"2024-03-08T03:23:03.263045Z","steps":["trace[743005755] 'process raft request'  (duration: 145.596177ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:23:03.264143Z","caller":"traceutil/trace.go:171","msg":"trace[423671852] linearizableReadLoop","detail":"{readStateIndex:2850; appliedIndex:2851; }","duration":"128.583101ms","start":"2024-03-08T03:23:03.135534Z","end":"2024-03-08T03:23:03.264118Z","steps":["trace[423671852] 'read index received'  (duration: 128.578922ms)","trace[423671852] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T03:23:03.264546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.934025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-08T03:23:03.265552Z","caller":"traceutil/trace.go:171","msg":"trace[501016056] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2442; }","duration":"130.02849ms","start":"2024-03-08T03:23:03.135507Z","end":"2024-03-08T03:23:03.265536Z","steps":["trace[501016056] 'agreement among raft nodes before linearized reading'  (duration: 128.705642ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:23:12.495791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ebeb2ab026a2136 switched to configuration voters=(11438776551117300022 17589384727122933662)"}
	{"level":"info","ts":"2024-03-08T03:23:12.496301Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e90308ed4eec0237","local-member-id":"9ebeb2ab026a2136","removed-remote-peer-id":"3687119b759a7dfe","removed-remote-peer-urls":["https://192.168.39.17:2380"]}
	{"level":"info","ts":"2024-03-08T03:23:12.496541Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.497026Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:23:12.497116Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.497567Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:23:12.497645Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:23:12.497783Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.498055Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe","error":"context canceled"}
	{"level":"warn","ts":"2024-03-08T03:23:12.499303Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3687119b759a7dfe","error":"failed to read 3687119b759a7dfe on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-08T03:23:12.499462Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.49978Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe","error":"context canceled"}
	{"level":"info","ts":"2024-03-08T03:23:12.499843Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:23:12.499897Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:23:12.499949Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"9ebeb2ab026a2136","removed-remote-peer-id":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.50973Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"9ebeb2ab026a2136","remote-peer-id-stream-handler":"9ebeb2ab026a2136","remote-peer-id-from":"3687119b759a7dfe"}
	{"level":"warn","ts":"2024-03-08T03:23:12.518751Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.17:40692","server-name":"","error":"read tcp 192.168.39.251:2380->192.168.39.17:40692: read: connection reset by peer"}
	
	
	==> etcd [79db3710d20d9dbe58583e27a0650e02c0dc6fdc6fe45d34eeb195e6eecbc025] <==
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-03-08T03:18:56.62851Z","caller":"traceutil/trace.go:171","msg":"trace[1597261702] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"7.818788021s","start":"2024-03-08T03:18:48.809719Z","end":"2024-03-08T03:18:56.628507Z","steps":["trace[1597261702] 'agreement among raft nodes before linearized reading'  (duration: 7.811415697s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T03:18:56.62853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T03:18:48.809716Z","time spent":"7.818801743s","remote":"127.0.0.1:33834","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" limit:10000 "}
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/08 03:18:56 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-08T03:18:56.772422Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.251:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:18:56.772568Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.251:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T03:18:56.772648Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9ebeb2ab026a2136","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-08T03:18:56.772929Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773048Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.7731Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773228Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773393Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.77349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773522Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f41a0c377dd7f79e"}
	{"level":"info","ts":"2024-03-08T03:18:56.773546Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.773579Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.77365Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.773941Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774087Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774255Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9ebeb2ab026a2136","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.774299Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3687119b759a7dfe"}
	{"level":"info","ts":"2024-03-08T03:18:56.777752Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.251:2380"}
	{"level":"info","ts":"2024-03-08T03:18:56.778027Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.251:2380"}
	{"level":"info","ts":"2024-03-08T03:18:56.778067Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-576225","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.251:2380"],"advertise-client-urls":["https://192.168.39.251:2379"]}
	
	
	==> kernel <==
	 03:25:47 up 17 min,  0 users,  load average: 0.15, 0.29, 0.31
	Linux ha-576225 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [08c05f03945c6abfd66721467401c14fa38cfa15415202fbd8a0e7fb2a0d904f] <==
	I0308 03:25:03.222630       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:25:13.240661       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:25:13.240718       1 main.go:227] handling current node
	I0308 03:25:13.240728       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:25:13.240736       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:25:13.240859       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:25:13.240865       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:25:23.253394       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:25:23.253448       1 main.go:227] handling current node
	I0308 03:25:23.253474       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:25:23.253480       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:25:23.253645       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:25:23.253680       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:25:33.267584       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:25:33.267637       1 main.go:227] handling current node
	I0308 03:25:33.267647       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:25:33.267653       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:25:33.267750       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:25:33.267788       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	I0308 03:25:43.274733       1 main.go:223] Handling node with IPs: map[192.168.39.251:{}]
	I0308 03:25:43.278206       1 main.go:227] handling current node
	I0308 03:25:43.278224       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0308 03:25:43.278233       1 main.go:250] Node ha-576225-m02 has CIDR [10.244.1.0/24] 
	I0308 03:25:43.278465       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0308 03:25:43.278503       1 main.go:250] Node ha-576225-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409] <==
	I0308 03:20:30.680517       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0308 03:20:30.680669       1 main.go:107] hostIP = 192.168.39.251
	podIP = 192.168.39.251
	I0308 03:20:30.680872       1 main.go:116] setting mtu 1500 for CNI 
	I0308 03:20:30.680914       1 main.go:146] kindnetd IP family: "ipv4"
	I0308 03:20:30.680952       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0308 03:20:34.078716       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0308 03:20:34.079207       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:35.080112       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:37.083032       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0308 03:20:50.092736       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
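
	This older kindnetd instance panicked after exhausting its node-list retries against the API VIP, which is why the other kindnet instance shown earlier (08c05f03...) took over once the apiserver came back. A rough stand-alone sketch of that retry-then-panic pattern using client-go; the retry budget, backoff, and structure are assumptions for illustration, not kindnetd's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves to the same https://10.96.0.1:443 VIP the
		// failures above point at.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		const maxRetries = 5 // assumed budget for the sketch
		for attempt := 1; ; attempt++ {
			nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				fmt.Printf("got %d nodes\n", len(nodes.Items))
				return
			}
			if attempt >= maxRetries {
				panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
			}
			fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
			time.Sleep(time.Duration(attempt) * time.Second)
		}
	}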
	
	
	==> kube-apiserver [690c7f04f7df3cfd4f0d779981a08da50acd31f508abb33ec8d6342ba8a36d37] <==
	I0308 03:21:17.217793       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0308 03:21:17.217824       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0308 03:21:17.217859       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 03:21:17.254673       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 03:21:17.254713       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 03:21:17.350519       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:21:17.357819       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 03:21:17.358040       1 aggregator.go:166] initial CRD sync complete...
	I0308 03:21:17.358089       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 03:21:17.358097       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 03:21:17.358103       1 cache.go:39] Caches are synced for autoregister controller
	I0308 03:21:17.401287       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 03:21:17.407221       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 03:21:17.407371       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 03:21:17.407280       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 03:21:17.407529       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 03:21:17.409584       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:21:17.409917       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0308 03:21:17.422927       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17]
	I0308 03:21:17.424240       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 03:21:17.432309       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0308 03:21:17.437088       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0308 03:21:18.218309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 03:21:18.761776       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.251]
	W0308 03:23:18.765957       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.128 192.168.39.251]
	
	
	==> kube-apiserver [9417e2d81aaece417e3fcd2cc9e0612a53bc74120c26c844ab2da3c9208e97f4] <==
	I0308 03:20:37.619582       1 options.go:220] external host was not specified, using 192.168.39.251
	I0308 03:20:37.625566       1 server.go:148] Version: v1.28.4
	I0308 03:20:37.625623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:20:38.412562       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0308 03:20:38.416641       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0308 03:20:38.416758       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0308 03:20:38.417012       1 instance.go:298] Using reconciler: lease
	W0308 03:20:58.407689       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0308 03:20:58.412420       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0308 03:20:58.418244       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [cf5e9db04d632dc389b6d7cf3fe85c5010cc1975f70e2de4dbb42ae7d3a80785] <==
	I0308 03:20:38.552213       1 serving.go:348] Generated self-signed cert in-memory
	I0308 03:20:38.957256       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0308 03:20:38.957446       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:20:38.959685       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0308 03:20:38.959838       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:20:38.960066       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 03:20:38.960254       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0308 03:20:59.426178       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.251:8443/healthz\": dial tcp 192.168.39.251:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e98027e15146aea1dcdd91e8dfb786bd5094ff1881cccf4f45e3eeef75ee98c7] <==
	I0308 03:24:00.714541       1 event.go:307] "Event occurred" object="ha-576225-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-576225-m04 status is now: NodeNotReady"
	I0308 03:24:00.739160       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-tbsl5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:24:00.770293       1 event.go:307] "Event occurred" object="kube-system/kindnet-5qbg6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:24:00.802452       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-mk2g8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:24:00.816101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="78.198887ms"
	I0308 03:24:00.816237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.347µs"
	E0308 03:24:15.332865       1 gc_controller.go:153] "Failed to get node" err="node \"ha-576225-m03\" not found" node="ha-576225-m03"
	E0308 03:24:15.333034       1 gc_controller.go:153] "Failed to get node" err="node \"ha-576225-m03\" not found" node="ha-576225-m03"
	E0308 03:24:15.333075       1 gc_controller.go:153] "Failed to get node" err="node \"ha-576225-m03\" not found" node="ha-576225-m03"
	E0308 03:24:15.333183       1 gc_controller.go:153] "Failed to get node" err="node \"ha-576225-m03\" not found" node="ha-576225-m03"
	E0308 03:24:15.333234       1 gc_controller.go:153] "Failed to get node" err="node \"ha-576225-m03\" not found" node="ha-576225-m03"
	I0308 03:24:15.345704       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/etcd-ha-576225-m03"
	I0308 03:24:15.377579       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/etcd-ha-576225-m03"
	I0308 03:24:15.377763       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-vip-ha-576225-m03"
	I0308 03:24:15.415941       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-vip-ha-576225-m03"
	I0308 03:24:15.415990       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-controller-manager-ha-576225-m03"
	I0308 03:24:15.446704       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-controller-manager-ha-576225-m03"
	I0308 03:24:15.446752       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-scheduler-ha-576225-m03"
	I0308 03:24:15.478125       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-scheduler-ha-576225-m03"
	I0308 03:24:15.478170       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-apiserver-ha-576225-m03"
	I0308 03:24:15.508742       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-apiserver-ha-576225-m03"
	I0308 03:24:15.508817       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-gqc9f"
	I0308 03:24:15.540879       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-gqc9f"
	I0308 03:24:15.541198       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-j425g"
	I0308 03:24:15.573052       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-j425g"
	
	
	==> kube-proxy [330abab8c9d779f5917453b80f35a36600876aaf596f3cda332ec09a38357ab2] <==
	I0308 03:20:39.116957       1 server_others.go:69] "Using iptables proxy"
	E0308 03:20:42.143719       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:45.216004       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:48.288071       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:20:54.431135       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:21:03.647732       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-576225": dial tcp 192.168.39.254:8443: connect: no route to host
	I0308 03:21:21.058744       1 node.go:141] Successfully retrieved node IP: 192.168.39.251
	I0308 03:21:21.106299       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:21:21.106459       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:21:21.109094       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:21:21.109204       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:21:21.109554       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:21:21.109589       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:21:21.110881       1 config.go:188] "Starting service config controller"
	I0308 03:21:21.110948       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:21:21.110974       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:21:21.111005       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:21:21.111903       1 config.go:315] "Starting node config controller"
	I0308 03:21:21.113921       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:21:21.211596       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 03:21:21.211655       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:21:21.214180       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [da2c9bb706201e74eb19d0cca0f8ecb95795e7b71d5feef424c304a1a02c4176] <==
	E0308 03:17:33.855048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:33.855116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:33.855156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.510787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.510974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.510787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.511041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:40.513478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:40.513543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:50.110881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:50.111079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:50.111193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:50.111246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:17:53.182851       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:17:53.183740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:11.616928       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:11.617108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:11.617568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:11.617713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:14.687242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:14.687444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:42.335669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:42.335769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-576225&resourceVersion=1778": dial tcp 192.168.39.254:8443: connect: no route to host
	W0308 03:18:51.552892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	E0308 03:18:51.553049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1775": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [41152db457cd329461ac82ee98740ecda4b8179fe6e5ecc6e19d00ae0803c603] <==
	W0308 03:21:08.314525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.251:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.314930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.251:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.415719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.251:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.415832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.251:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.470166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.251:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.470261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.251:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.573987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.39.251:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.574157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.251:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:08.996437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.251:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	E0308 03:21:08.996517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.251:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.251:8443: connect: connection refused
	W0308 03:21:17.267821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:21:17.267942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:21:17.268045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:21:17.268081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:21:17.268197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 03:21:17.268262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 03:21:17.268396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:21:17.268444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:21:17.268512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 03:21:17.268543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 03:21:17.268620       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:21:17.268648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 03:21:17.268762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 03:21:17.268797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0308 03:21:18.929422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [77dc7f2494354dc4d9b78cf37529b63403338a830ced00a5cfe98cdcf2a91446] <==
	W0308 03:18:48.770619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:18:48.770711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:18:49.106141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 03:18:49.106274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 03:18:49.316718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:49.316778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:49.393518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:49.393676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:49.988889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:18:49.988982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:18:50.042581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:18:50.042654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:18:50.174771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 03:18:50.174824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 03:18:50.849107       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:18:50.849160       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:18:51.687067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:18:51.687190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:18:51.704684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:18:51.704806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:18:51.746918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 03:18:51.746991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0308 03:18:56.588530       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 03:18:56.588639       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0308 03:18:56.588822       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 08 03:21:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:21:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:21:41 ha-576225 kubelet[1359]: I0308 03:21:41.953108    1359 scope.go:117] "RemoveContainer" containerID="8c8be87a59f4f4f3c45e56670e76baa62aa63d5dea50255601ce44dd05b09409"
	Mar 08 03:21:41 ha-576225 kubelet[1359]: I0308 03:21:41.953722    1359 scope.go:117] "RemoveContainer" containerID="f39e571f16421306fb7fe06535380691e97da5f516ce544527d73b6fb3f4c291"
	Mar 08 03:22:13 ha-576225 kubelet[1359]: I0308 03:22:13.860401    1359 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-9594n" podStartSLOduration=587.93385144 podCreationTimestamp="2024-03-08 03:12:25 +0000 UTC" firstStartedPulling="2024-03-08 03:12:26.402260285 +0000 UTC m=+177.641149284" lastFinishedPulling="2024-03-08 03:12:27.328627098 +0000 UTC m=+178.567516114" observedRunningTime="2024-03-08 03:12:27.872774416 +0000 UTC m=+179.111663437" watchObservedRunningTime="2024-03-08 03:22:13.86021827 +0000 UTC m=+765.099107290"
	Mar 08 03:22:29 ha-576225 kubelet[1359]: E0308 03:22:29.003500    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:22:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:22:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:23:29 ha-576225 kubelet[1359]: E0308 03:23:29.007597    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:23:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:23:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:23:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:23:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:24:29 ha-576225 kubelet[1359]: E0308 03:24:29.008696    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:24:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:24:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:24:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:24:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:25:29 ha-576225 kubelet[1359]: E0308 03:25:29.001914    1359 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:25:29 ha-576225 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:25:29 ha-576225 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:25:29 ha-576225 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:25:29 ha-576225 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:25:46.008484  935962 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18333-911675/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
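(Aside on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner defaults to a 64 KiB token limit, so a single very long line in lastStart.txt makes the read fail. The following is a minimal, illustrative sketch of reading such a file with an enlarged scanner buffer; it is not minikube's actual logs code, and the short file name is a hypothetical stand-in for the path in the error.)

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical stand-in for .../logs/lastStart.txt from the error above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB);
		// allow lines up to 10 MiB so long start logs do not trip the scanner.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}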
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-576225 -n ha-576225
helpers_test.go:261: (dbg) Run:  kubectl --context ha-576225 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopCluster (141.98s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (313.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959285
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-959285
E0308 03:41:35.304830  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-959285: exit status 82 (2m2.700237265s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-959285-m03"  ...
	* Stopping node "multinode-959285-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-959285" : exit status 82
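(Aside on exit status 82 above: GUEST_STOP_TIMEOUT indicates the stop command kept polling the guest VM state and gave up while the VM still reported "Running". The sketch below only illustrates that poll-until-deadline pattern under assumed names; it is not minikube's actual stop implementation, and vmState is a hypothetical stand-in for the driver's state query. The timeout is shortened here so the example finishes quickly, whereas the run above waited roughly two minutes.)

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a hypothetical stand-in for asking the KVM driver for the guest state.
	func vmState() string { return "Running" }

	// stopWithTimeout polls the guest until it stops or the deadline passes.
	func stopWithTimeout(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if vmState() == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second) // re-check the guest periodically
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := stopWithTimeout(10 * time.Second); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}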
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959285 --wait=true -v=8 --alsologtostderr
E0308 03:42:52.008345  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:43:32.256688  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959285 --wait=true -v=8 --alsologtostderr: (3m8.450588759s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959285
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959285 -n multinode-959285
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 logs -n 25
E0308 03:45:55.053634  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-959285 logs -n 25: (1.644661383s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285:/home/docker/cp-test_multinode-959285-m02_multinode-959285.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285 sudo cat                                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m02_multinode-959285.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03:/home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285-m03 sudo cat                                   | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp testdata/cp-test.txt                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285:/home/docker/cp-test_multinode-959285-m03_multinode-959285.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285 sudo cat                                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02:/home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285-m02 sudo cat                                   | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-959285 node stop m03                                                          | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	| node    | multinode-959285 node start                                                             | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| stop    | -p multinode-959285                                                                     | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| start   | -p multinode-959285                                                                     | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:42 UTC | 08 Mar 24 03:45 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:42:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:42:45.326243  944177 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:42:45.326525  944177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:42:45.326542  944177 out.go:304] Setting ErrFile to fd 2...
	I0308 03:42:45.326550  944177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:42:45.327129  944177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:42:45.328115  944177 out.go:298] Setting JSON to false
	I0308 03:42:45.329071  944177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26691,"bootTime":1709842674,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:42:45.329133  944177 start.go:139] virtualization: kvm guest
	I0308 03:42:45.331242  944177 out.go:177] * [multinode-959285] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:42:45.332551  944177 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:42:45.333891  944177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:42:45.332536  944177 notify.go:220] Checking for updates...
	I0308 03:42:45.335341  944177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:42:45.336544  944177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:42:45.337669  944177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:42:45.338758  944177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:42:45.340258  944177 config.go:182] Loaded profile config "multinode-959285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:42:45.340368  944177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:42:45.340762  944177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:42:45.340817  944177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:42:45.357046  944177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0308 03:42:45.357564  944177 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:42:45.358202  944177 main.go:141] libmachine: Using API Version  1
	I0308 03:42:45.358223  944177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:42:45.358584  944177 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:42:45.358773  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.393367  944177 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:42:45.394513  944177 start.go:297] selected driver: kvm2
	I0308 03:42:45.394523  944177 start.go:901] validating driver "kvm2" against &{Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:42:45.394636  944177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:42:45.394953  944177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:42:45.395010  944177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:42:45.410088  944177 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:42:45.410940  944177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:42:45.411041  944177 cni.go:84] Creating CNI manager for ""
	I0308 03:42:45.411059  944177 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 03:42:45.411114  944177 start.go:340] cluster config:
	{Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:42:45.411255  944177 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:42:45.413712  944177 out.go:177] * Starting "multinode-959285" primary control-plane node in "multinode-959285" cluster
	I0308 03:42:45.414935  944177 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:42:45.414972  944177 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:42:45.414983  944177 cache.go:56] Caching tarball of preloaded images
	I0308 03:42:45.415076  944177 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:42:45.415089  944177 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:42:45.415206  944177 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/config.json ...
	I0308 03:42:45.415400  944177 start.go:360] acquireMachinesLock for multinode-959285: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:42:45.415445  944177 start.go:364] duration metric: took 22.911µs to acquireMachinesLock for "multinode-959285"
	I0308 03:42:45.415458  944177 start.go:96] Skipping create...Using existing machine configuration
	I0308 03:42:45.415466  944177 fix.go:54] fixHost starting: 
	I0308 03:42:45.415758  944177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:42:45.415792  944177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:42:45.429881  944177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44923
	I0308 03:42:45.430303  944177 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:42:45.430779  944177 main.go:141] libmachine: Using API Version  1
	I0308 03:42:45.430842  944177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:42:45.431197  944177 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:42:45.431452  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.431656  944177 main.go:141] libmachine: (multinode-959285) Calling .GetState
	I0308 03:42:45.433353  944177 fix.go:112] recreateIfNeeded on multinode-959285: state=Running err=<nil>
	W0308 03:42:45.433393  944177 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 03:42:45.435311  944177 out.go:177] * Updating the running kvm2 "multinode-959285" VM ...
	I0308 03:42:45.436431  944177 machine.go:94] provisionDockerMachine start ...
	I0308 03:42:45.436455  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.436700  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.439169  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.439624  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.439661  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.439748  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.439922  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.440074  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.440198  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.440380  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.440602  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.440618  944177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 03:42:45.550914  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959285
	
	I0308 03:42:45.550948  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.551189  944177 buildroot.go:166] provisioning hostname "multinode-959285"
	I0308 03:42:45.551223  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.551394  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.554100  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.554445  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.554489  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.554580  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.554770  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.554926  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.555052  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.555232  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.555402  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.555415  944177 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959285 && echo "multinode-959285" | sudo tee /etc/hostname
	I0308 03:42:45.685819  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959285
	
	I0308 03:42:45.685862  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.688887  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.689338  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.689375  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.689609  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.689807  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.689997  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.690119  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.690277  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.690500  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.690519  944177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959285/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:42:45.798639  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
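	For reference, the hostname-provisioning exchange above amounts to the following shell, copied from the two SSH commands in the log (the hostname multinode-959285 and the 127.0.1.1 mapping are specific to this run):
	
		# set the transient and persistent hostname
		sudo hostname multinode-959285 && echo "multinode-959285" | sudo tee /etc/hostname
		# ensure /etc/hosts maps 127.0.1.1 to the new hostname
		if ! grep -xq '.*\smultinode-959285' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959285/g' /etc/hosts
			else
				echo '127.0.1.1 multinode-959285' | sudo tee -a /etc/hosts
			fi
		fi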
	I0308 03:42:45.798668  944177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:42:45.798686  944177 buildroot.go:174] setting up certificates
	I0308 03:42:45.798695  944177 provision.go:84] configureAuth start
	I0308 03:42:45.798707  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.798976  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:42:45.801477  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.801805  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.801840  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.802023  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.804205  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.804533  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.804571  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.804757  944177 provision.go:143] copyHostCerts
	I0308 03:42:45.804784  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:42:45.804830  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:42:45.804840  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:42:45.804904  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:42:45.804970  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:42:45.804990  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:42:45.804994  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:42:45.805016  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:42:45.805063  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:42:45.805079  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:42:45.805085  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:42:45.805111  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:42:45.805159  944177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.multinode-959285 san=[127.0.0.1 192.168.39.174 localhost minikube multinode-959285]
	I0308 03:42:46.005417  944177 provision.go:177] copyRemoteCerts
	I0308 03:42:46.005491  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:42:46.005520  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:46.008149  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.008480  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:46.008501  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.008722  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:46.008929  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.009118  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:46.009254  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:42:46.096891  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:42:46.096950  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:42:46.129177  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:42:46.129227  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0308 03:42:46.158908  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:42:46.158957  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 03:42:46.188198  944177 provision.go:87] duration metric: took 389.488654ms to configureAuth
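	The configureAuth phase that just completed regenerates the machine's server certificate (SAN list logged above) and copies the CA and server cert/key into the guest. A quick way to confirm the result on the guest, using only paths taken from the log:
	
		# target directory created by the runner (the mkdir in the log repeats the path three times)
		sudo mkdir -p /etc/docker
		# after copyRemoteCerts these files should exist, with the byte sizes reported above
		ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem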
	I0308 03:42:46.188227  944177 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:42:46.188473  944177 config.go:182] Loaded profile config "multinode-959285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:42:46.188591  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:46.190947  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.191369  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:46.191395  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.191506  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:46.191691  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.191853  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.192048  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:46.192205  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:46.192364  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:46.192378  944177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:44:17.023961  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:44:17.023994  944177 machine.go:97] duration metric: took 1m31.587546513s to provisionDockerMachine
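	The %!s(MISSING) in the command above is an artifact of the logger re-formatting an already-formatted string; judging from the echoed output, the command actually executed is equivalent to the sketch below. Note the timestamps: it was issued at 03:42:46 and returned at 03:44:17, so essentially all of the 1m31.58s provisionDockerMachine duration was spent waiting on this crio restart.
	
		sudo mkdir -p /etc/sysconfig && printf %s "
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio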
	I0308 03:44:17.024011  944177 start.go:293] postStartSetup for "multinode-959285" (driver="kvm2")
	I0308 03:44:17.024028  944177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:44:17.024062  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.024467  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:44:17.024505  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.027909  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.028374  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.028414  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.028608  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.028796  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.028966  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.029119  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.113576  944177 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:44:17.117878  944177 command_runner.go:130] > NAME=Buildroot
	I0308 03:44:17.117891  944177 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 03:44:17.117896  944177 command_runner.go:130] > ID=buildroot
	I0308 03:44:17.117900  944177 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 03:44:17.117905  944177 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 03:44:17.118031  944177 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:44:17.118068  944177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:44:17.118138  944177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:44:17.118210  944177 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:44:17.118221  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:44:17.118305  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:44:17.128647  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:44:17.153824  944177 start.go:296] duration metric: took 129.801621ms for postStartSetup
	I0308 03:44:17.153878  944177 fix.go:56] duration metric: took 1m31.738411758s for fixHost
	I0308 03:44:17.153900  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.156530  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.156913  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.156934  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.157101  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.157270  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.157471  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.157610  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.157803  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:44:17.157992  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:44:17.158005  944177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:44:17.262164  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709869457.245029886
	
	I0308 03:44:17.262193  944177 fix.go:216] guest clock: 1709869457.245029886
	I0308 03:44:17.262202  944177 fix.go:229] Guest: 2024-03-08 03:44:17.245029886 +0000 UTC Remote: 2024-03-08 03:44:17.153885528 +0000 UTC m=+91.878096196 (delta=91.144358ms)
	I0308 03:44:17.262230  944177 fix.go:200] guest clock delta is within tolerance: 91.144358ms
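	The clock check above is another logger-mangled format string; going by the returned value, the probe run on the guest is simply the command below, and its result is compared against the host clock (here the 91.144358ms delta is accepted as within tolerance).
	
		# guest time as seconds.nanoseconds since the epoch
		date +%s.%N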
	I0308 03:44:17.262237  944177 start.go:83] releasing machines lock for "multinode-959285", held for 1m31.846782767s
	I0308 03:44:17.262267  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.262537  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:44:17.265137  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.265588  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.265627  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.265698  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266311  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266535  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266633  944177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:44:17.266675  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.266784  944177 ssh_runner.go:195] Run: cat /version.json
	I0308 03:44:17.266823  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.269408  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.269804  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.269844  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.269870  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.270018  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.270191  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.270304  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.270329  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.270345  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.270492  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.270517  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.270634  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.270764  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.270896  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.349904  944177 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0308 03:44:17.376941  944177 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 03:44:17.377874  944177 ssh_runner.go:195] Run: systemctl --version
	I0308 03:44:17.383700  944177 command_runner.go:130] > systemd 252 (252)
	I0308 03:44:17.383729  944177 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0308 03:44:17.384041  944177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:44:17.544085  944177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 03:44:17.552561  944177 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0308 03:44:17.552718  944177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:44:17.552790  944177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:44:17.562746  944177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 03:44:17.562764  944177 start.go:494] detecting cgroup driver to use...
	I0308 03:44:17.562833  944177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:44:17.579303  944177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:44:17.593537  944177 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:44:17.593582  944177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:44:17.607527  944177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:44:17.622146  944177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:44:17.769218  944177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:44:17.918989  944177 docker.go:233] disabling docker service ...
	I0308 03:44:17.919054  944177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:44:17.934940  944177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:44:17.948677  944177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:44:18.097496  944177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:44:18.252890  944177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
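	Because this profile runs CRI-O, the runner first makes sure the competing runtimes are stopped and cannot come back. Collected from the Run: lines above, in order:
	
		sudo systemctl stop -f containerd
		sudo systemctl is-active --quiet service containerd   # check whether it is still active
		# cri-dockerd: stop, disable and mask so kubelet cannot pick it up
		sudo systemctl stop -f cri-docker.socket
		sudo systemctl stop -f cri-docker.service
		sudo systemctl disable cri-docker.socket
		sudo systemctl mask cri-docker.service
		# docker itself gets the same treatment
		sudo systemctl stop -f docker.socket
		sudo systemctl stop -f docker.service
		sudo systemctl disable docker.socket
		sudo systemctl mask docker.service
		sudo systemctl is-active --quiet service docker       # check whether it is still active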
	I0308 03:44:18.270021  944177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:44:18.290297  944177 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0308 03:44:18.290724  944177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:44:18.290799  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.302553  944177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:44:18.302616  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.315049  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.326587  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.338201  944177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:44:18.349941  944177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:44:18.360110  944177 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 03:44:18.360384  944177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:44:18.370437  944177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:44:18.515250  944177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:44:28.258888  944177 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.743596376s)
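	The CRI-O configuration pass above (again with a mangled printf verb in the crictl.yaml step) reduces to the commands below, all taken from the Run: lines; the final restart took 9.74s on this run:
	
		# point crictl at the CRI-O socket
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
		# pin the pause image and switch the cgroup manager to cgroupfs in the drop-in config
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# drop stale minikube CNI config, verify/enable forwarding, reload units and restart CRI-O
		sudo rm -rf /etc/cni/net.mk
		sudo sysctl net.bridge.bridge-nf-call-iptables
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		sudo systemctl daemon-reload
		sudo systemctl restart crio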
	I0308 03:44:28.258927  944177 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:44:28.258992  944177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:44:28.264465  944177 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0308 03:44:28.264517  944177 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 03:44:28.264528  944177 command_runner.go:130] > Device: 0,22	Inode: 1334        Links: 1
	I0308 03:44:28.264539  944177 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 03:44:28.264545  944177 command_runner.go:130] > Access: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264553  944177 command_runner.go:130] > Modify: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264561  944177 command_runner.go:130] > Change: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264567  944177 command_runner.go:130] >  Birth: -
	I0308 03:44:28.264679  944177 start.go:562] Will wait 60s for crictl version
	I0308 03:44:28.264736  944177 ssh_runner.go:195] Run: which crictl
	I0308 03:44:28.268886  944177 command_runner.go:130] > /usr/bin/crictl
	I0308 03:44:28.269015  944177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:44:28.312939  944177 command_runner.go:130] > Version:  0.1.0
	I0308 03:44:28.312957  944177 command_runner.go:130] > RuntimeName:  cri-o
	I0308 03:44:28.312961  944177 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0308 03:44:28.312966  944177 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 03:44:28.313068  944177 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:44:28.313139  944177 ssh_runner.go:195] Run: crio --version
	I0308 03:44:28.343783  944177 command_runner.go:130] > crio version 1.29.1
	I0308 03:44:28.343799  944177 command_runner.go:130] > Version:        1.29.1
	I0308 03:44:28.343807  944177 command_runner.go:130] > GitCommit:      unknown
	I0308 03:44:28.343813  944177 command_runner.go:130] > GitCommitDate:  unknown
	I0308 03:44:28.343820  944177 command_runner.go:130] > GitTreeState:   clean
	I0308 03:44:28.343838  944177 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0308 03:44:28.343846  944177 command_runner.go:130] > GoVersion:      go1.21.6
	I0308 03:44:28.343853  944177 command_runner.go:130] > Compiler:       gc
	I0308 03:44:28.343862  944177 command_runner.go:130] > Platform:       linux/amd64
	I0308 03:44:28.343872  944177 command_runner.go:130] > Linkmode:       dynamic
	I0308 03:44:28.343880  944177 command_runner.go:130] > BuildTags:      
	I0308 03:44:28.343887  944177 command_runner.go:130] >   containers_image_ostree_stub
	I0308 03:44:28.343896  944177 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0308 03:44:28.343906  944177 command_runner.go:130] >   btrfs_noversion
	I0308 03:44:28.343915  944177 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0308 03:44:28.343926  944177 command_runner.go:130] >   libdm_no_deferred_remove
	I0308 03:44:28.343932  944177 command_runner.go:130] >   seccomp
	I0308 03:44:28.343940  944177 command_runner.go:130] > LDFlags:          unknown
	I0308 03:44:28.343947  944177 command_runner.go:130] > SeccompEnabled:   true
	I0308 03:44:28.343956  944177 command_runner.go:130] > AppArmorEnabled:  false
	I0308 03:44:28.344986  944177 ssh_runner.go:195] Run: crio --version
	I0308 03:44:28.375126  944177 command_runner.go:130] > crio version 1.29.1
	I0308 03:44:28.375147  944177 command_runner.go:130] > Version:        1.29.1
	I0308 03:44:28.375158  944177 command_runner.go:130] > GitCommit:      unknown
	I0308 03:44:28.375164  944177 command_runner.go:130] > GitCommitDate:  unknown
	I0308 03:44:28.375170  944177 command_runner.go:130] > GitTreeState:   clean
	I0308 03:44:28.375177  944177 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0308 03:44:28.375183  944177 command_runner.go:130] > GoVersion:      go1.21.6
	I0308 03:44:28.375188  944177 command_runner.go:130] > Compiler:       gc
	I0308 03:44:28.375195  944177 command_runner.go:130] > Platform:       linux/amd64
	I0308 03:44:28.375202  944177 command_runner.go:130] > Linkmode:       dynamic
	I0308 03:44:28.375214  944177 command_runner.go:130] > BuildTags:      
	I0308 03:44:28.375221  944177 command_runner.go:130] >   containers_image_ostree_stub
	I0308 03:44:28.375236  944177 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0308 03:44:28.375243  944177 command_runner.go:130] >   btrfs_noversion
	I0308 03:44:28.375258  944177 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0308 03:44:28.375265  944177 command_runner.go:130] >   libdm_no_deferred_remove
	I0308 03:44:28.375271  944177 command_runner.go:130] >   seccomp
	I0308 03:44:28.375278  944177 command_runner.go:130] > LDFlags:          unknown
	I0308 03:44:28.375288  944177 command_runner.go:130] > SeccompEnabled:   true
	I0308 03:44:28.375296  944177 command_runner.go:130] > AppArmorEnabled:  false
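	Both probes above report the same runtime build, which is what gates the "Preparing Kubernetes v1.28.4 on CRI-O 1.29.1" step that follows. To reproduce the check by hand on the guest:
	
		sudo /usr/bin/crictl version   # expects RuntimeName cri-o, RuntimeApiVersion v1
		crio --version                 # crio version 1.29.1, go1.21.6, built 2024-02-23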
	I0308 03:44:28.377045  944177 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:44:28.378408  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:44:28.380963  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:28.381339  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:28.381368  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:28.381625  944177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:44:28.386091  944177 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0308 03:44:28.386183  944177 kubeadm.go:877] updating cluster {Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:44:28.386312  944177 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:44:28.386354  944177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:44:28.438045  944177 command_runner.go:130] > {
	I0308 03:44:28.438075  944177 command_runner.go:130] >   "images": [
	I0308 03:44:28.438080  944177 command_runner.go:130] >     {
	I0308 03:44:28.438092  944177 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0308 03:44:28.438098  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438107  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0308 03:44:28.438112  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438118  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438130  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0308 03:44:28.438148  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0308 03:44:28.438154  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438164  944177 command_runner.go:130] >       "size": "65258016",
	I0308 03:44:28.438171  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438178  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438190  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438200  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438208  944177 command_runner.go:130] >     },
	I0308 03:44:28.438213  944177 command_runner.go:130] >     {
	I0308 03:44:28.438224  944177 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0308 03:44:28.438233  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438244  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0308 03:44:28.438253  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438263  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438274  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0308 03:44:28.438288  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0308 03:44:28.438297  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438304  944177 command_runner.go:130] >       "size": "65291810",
	I0308 03:44:28.438312  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438329  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438347  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438353  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438361  944177 command_runner.go:130] >     },
	I0308 03:44:28.438367  944177 command_runner.go:130] >     {
	I0308 03:44:28.438379  944177 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0308 03:44:28.438389  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438400  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0308 03:44:28.438416  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438425  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438439  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0308 03:44:28.438454  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0308 03:44:28.438462  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438468  944177 command_runner.go:130] >       "size": "1363676",
	I0308 03:44:28.438478  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438487  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438496  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438506  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438514  944177 command_runner.go:130] >     },
	I0308 03:44:28.438522  944177 command_runner.go:130] >     {
	I0308 03:44:28.438534  944177 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0308 03:44:28.438543  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438554  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0308 03:44:28.438563  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438570  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438585  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0308 03:44:28.438607  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0308 03:44:28.438617  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438625  944177 command_runner.go:130] >       "size": "31470524",
	I0308 03:44:28.438633  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438642  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438651  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438660  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438665  944177 command_runner.go:130] >     },
	I0308 03:44:28.438673  944177 command_runner.go:130] >     {
	I0308 03:44:28.438684  944177 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0308 03:44:28.438693  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438704  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0308 03:44:28.438712  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438721  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438735  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0308 03:44:28.438750  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0308 03:44:28.438759  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438768  944177 command_runner.go:130] >       "size": "53621675",
	I0308 03:44:28.438784  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438794  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438803  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438818  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438827  944177 command_runner.go:130] >     },
	I0308 03:44:28.438835  944177 command_runner.go:130] >     {
	I0308 03:44:28.438844  944177 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0308 03:44:28.438854  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438871  944177 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0308 03:44:28.438879  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438888  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438901  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0308 03:44:28.438915  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0308 03:44:28.438924  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438930  944177 command_runner.go:130] >       "size": "295456551",
	I0308 03:44:28.438938  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.438947  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.438956  944177 command_runner.go:130] >       },
	I0308 03:44:28.438964  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438970  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438978  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438982  944177 command_runner.go:130] >     },
	I0308 03:44:28.438990  944177 command_runner.go:130] >     {
	I0308 03:44:28.438998  944177 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0308 03:44:28.439007  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439016  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0308 03:44:28.439024  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439030  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439042  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0308 03:44:28.439057  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0308 03:44:28.439064  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439070  944177 command_runner.go:130] >       "size": "127226832",
	I0308 03:44:28.439078  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439086  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439094  944177 command_runner.go:130] >       },
	I0308 03:44:28.439101  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439125  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439135  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439144  944177 command_runner.go:130] >     },
	I0308 03:44:28.439152  944177 command_runner.go:130] >     {
	I0308 03:44:28.439165  944177 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0308 03:44:28.439173  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439182  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0308 03:44:28.439191  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439199  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439231  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0308 03:44:28.439247  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0308 03:44:28.439256  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439265  944177 command_runner.go:130] >       "size": "123261750",
	I0308 03:44:28.439273  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439278  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439287  944177 command_runner.go:130] >       },
	I0308 03:44:28.439297  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439306  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439316  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439325  944177 command_runner.go:130] >     },
	I0308 03:44:28.439332  944177 command_runner.go:130] >     {
	I0308 03:44:28.439344  944177 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0308 03:44:28.439351  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439356  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0308 03:44:28.439360  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439363  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439373  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0308 03:44:28.439380  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0308 03:44:28.439384  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439387  944177 command_runner.go:130] >       "size": "74749335",
	I0308 03:44:28.439391  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.439395  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439400  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439404  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439407  944177 command_runner.go:130] >     },
	I0308 03:44:28.439411  944177 command_runner.go:130] >     {
	I0308 03:44:28.439423  944177 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0308 03:44:28.439430  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439435  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0308 03:44:28.439441  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439445  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439454  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0308 03:44:28.439463  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0308 03:44:28.439468  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439472  944177 command_runner.go:130] >       "size": "61551410",
	I0308 03:44:28.439478  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439482  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439488  944177 command_runner.go:130] >       },
	I0308 03:44:28.439492  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439498  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439502  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439508  944177 command_runner.go:130] >     },
	I0308 03:44:28.439512  944177 command_runner.go:130] >     {
	I0308 03:44:28.439520  944177 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0308 03:44:28.439526  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439531  944177 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0308 03:44:28.439537  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439541  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439550  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0308 03:44:28.439559  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0308 03:44:28.439564  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439568  944177 command_runner.go:130] >       "size": "750414",
	I0308 03:44:28.439576  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439585  944177 command_runner.go:130] >         "value": "65535"
	I0308 03:44:28.439594  944177 command_runner.go:130] >       },
	I0308 03:44:28.439603  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439612  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439620  944177 command_runner.go:130] >       "pinned": true
	I0308 03:44:28.439628  944177 command_runner.go:130] >     }
	I0308 03:44:28.439636  944177 command_runner.go:130] >   ]
	I0308 03:44:28.439644  944177 command_runner.go:130] > }
	I0308 03:44:28.439858  944177 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:44:28.439872  944177 crio.go:415] Images already preloaded, skipping extraction
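	The preload decision above is driven entirely by the image list CRI-O returns; since every image required for Kubernetes v1.28.4 is already present, the preload tarball extraction is skipped. The same check by hand (the grep is a hypothetical convenience, not part of the logged run):
	
		# full JSON inventory, as used by minikube
		sudo crictl images --output json
		# quick spot check of the v1.28.4 control-plane images
		sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'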
	I0308 03:44:28.439924  944177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:44:28.476415  944177 command_runner.go:130] > {
	I0308 03:44:28.476443  944177 command_runner.go:130] >   "images": [
	I0308 03:44:28.476449  944177 command_runner.go:130] >     {
	I0308 03:44:28.476462  944177 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0308 03:44:28.476470  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476478  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0308 03:44:28.476483  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476488  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476516  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0308 03:44:28.476527  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0308 03:44:28.476531  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476535  944177 command_runner.go:130] >       "size": "65258016",
	I0308 03:44:28.476540  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476544  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476552  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476559  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476566  944177 command_runner.go:130] >     },
	I0308 03:44:28.476569  944177 command_runner.go:130] >     {
	I0308 03:44:28.476578  944177 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0308 03:44:28.476588  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476596  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0308 03:44:28.476605  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476612  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476627  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0308 03:44:28.476641  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0308 03:44:28.476650  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476659  944177 command_runner.go:130] >       "size": "65291810",
	I0308 03:44:28.476668  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476684  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476693  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476703  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476711  944177 command_runner.go:130] >     },
	I0308 03:44:28.476716  944177 command_runner.go:130] >     {
	I0308 03:44:28.476729  944177 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0308 03:44:28.476738  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476748  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0308 03:44:28.476755  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476759  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476768  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0308 03:44:28.476777  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0308 03:44:28.476783  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476788  944177 command_runner.go:130] >       "size": "1363676",
	I0308 03:44:28.476794  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476798  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476820  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476828  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476831  944177 command_runner.go:130] >     },
	I0308 03:44:28.476834  944177 command_runner.go:130] >     {
	I0308 03:44:28.476840  944177 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0308 03:44:28.476846  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476852  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0308 03:44:28.476858  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476862  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476872  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0308 03:44:28.476891  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0308 03:44:28.476899  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476909  944177 command_runner.go:130] >       "size": "31470524",
	I0308 03:44:28.476918  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476928  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476937  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476946  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476953  944177 command_runner.go:130] >     },
	I0308 03:44:28.476959  944177 command_runner.go:130] >     {
	I0308 03:44:28.476972  944177 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0308 03:44:28.476981  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476987  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0308 03:44:28.476993  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476998  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477007  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0308 03:44:28.477017  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0308 03:44:28.477030  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477038  944177 command_runner.go:130] >       "size": "53621675",
	I0308 03:44:28.477041  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.477048  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477052  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477059  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477062  944177 command_runner.go:130] >     },
	I0308 03:44:28.477066  944177 command_runner.go:130] >     {
	I0308 03:44:28.477072  944177 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0308 03:44:28.477079  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477088  944177 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0308 03:44:28.477094  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477098  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477107  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0308 03:44:28.477117  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0308 03:44:28.477122  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477127  944177 command_runner.go:130] >       "size": "295456551",
	I0308 03:44:28.477133  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477137  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477146  944177 command_runner.go:130] >       },
	I0308 03:44:28.477152  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477156  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477162  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477166  944177 command_runner.go:130] >     },
	I0308 03:44:28.477168  944177 command_runner.go:130] >     {
	I0308 03:44:28.477174  944177 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0308 03:44:28.477181  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477186  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0308 03:44:28.477191  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477196  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477205  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0308 03:44:28.477214  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0308 03:44:28.477220  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477225  944177 command_runner.go:130] >       "size": "127226832",
	I0308 03:44:28.477230  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477234  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477240  944177 command_runner.go:130] >       },
	I0308 03:44:28.477244  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477248  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477253  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477256  944177 command_runner.go:130] >     },
	I0308 03:44:28.477262  944177 command_runner.go:130] >     {
	I0308 03:44:28.477269  944177 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0308 03:44:28.477288  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477300  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0308 03:44:28.477308  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477318  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477342  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0308 03:44:28.477352  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0308 03:44:28.477357  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477361  944177 command_runner.go:130] >       "size": "123261750",
	I0308 03:44:28.477367  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477371  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477377  944177 command_runner.go:130] >       },
	I0308 03:44:28.477381  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477387  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477391  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477397  944177 command_runner.go:130] >     },
	I0308 03:44:28.477400  944177 command_runner.go:130] >     {
	I0308 03:44:28.477407  944177 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0308 03:44:28.477413  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477418  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0308 03:44:28.477424  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477428  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477437  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0308 03:44:28.477446  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0308 03:44:28.477454  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477461  944177 command_runner.go:130] >       "size": "74749335",
	I0308 03:44:28.477465  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.477471  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477475  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477482  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477487  944177 command_runner.go:130] >     },
	I0308 03:44:28.477495  944177 command_runner.go:130] >     {
	I0308 03:44:28.477505  944177 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0308 03:44:28.477514  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477524  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0308 03:44:28.477532  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477538  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477550  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0308 03:44:28.477562  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0308 03:44:28.477569  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477583  944177 command_runner.go:130] >       "size": "61551410",
	I0308 03:44:28.477593  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477603  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477610  944177 command_runner.go:130] >       },
	I0308 03:44:28.477617  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477625  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477633  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477641  944177 command_runner.go:130] >     },
	I0308 03:44:28.477649  944177 command_runner.go:130] >     {
	I0308 03:44:28.477658  944177 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0308 03:44:28.477667  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477674  944177 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0308 03:44:28.477682  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477692  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477705  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0308 03:44:28.477718  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0308 03:44:28.477726  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477734  944177 command_runner.go:130] >       "size": "750414",
	I0308 03:44:28.477743  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477752  944177 command_runner.go:130] >         "value": "65535"
	I0308 03:44:28.477760  944177 command_runner.go:130] >       },
	I0308 03:44:28.477765  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477773  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477782  944177 command_runner.go:130] >       "pinned": true
	I0308 03:44:28.477790  944177 command_runner.go:130] >     }
	I0308 03:44:28.477795  944177 command_runner.go:130] >   ]
	I0308 03:44:28.477803  944177 command_runner.go:130] > }
	I0308 03:44:28.478064  944177 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:44:28.478085  944177 cache_images.go:84] Images are preloaded, skipping loading
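	The preload check above is driven by the JSON image inventory just printed. A rough way to reproduce that listing by hand, assuming the profile name used in this run and that crictl is available inside the guest, is:

		minikube ssh -p multinode-959285 -- sudo crictl images -o json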
	I0308 03:44:28.478093  944177 kubeadm.go:928] updating node { 192.168.39.174 8443 v1.28.4 crio true true} ...
	I0308 03:44:28.478238  944177 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-959285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
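	The kubelet unit drop-in and cluster config printed above describe what minikube intends to run on this node. To confirm what the node's kubelet is actually started with, assuming the standard systemd unit name inside the guest, something like the following should print the unit together with any drop-ins:

		minikube ssh -p multinode-959285 -- systemctl cat kubelet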
	I0308 03:44:28.478307  944177 ssh_runner.go:195] Run: crio config
	I0308 03:44:28.514581  944177 command_runner.go:130] ! time="2024-03-08 03:44:28.497000109Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0308 03:44:28.519869  944177 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0308 03:44:28.532868  944177 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0308 03:44:28.532888  944177 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0308 03:44:28.532894  944177 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0308 03:44:28.532909  944177 command_runner.go:130] > #
	I0308 03:44:28.532916  944177 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0308 03:44:28.532927  944177 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0308 03:44:28.532938  944177 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0308 03:44:28.532947  944177 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0308 03:44:28.532953  944177 command_runner.go:130] > # reload'.
	I0308 03:44:28.532959  944177 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0308 03:44:28.532967  944177 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0308 03:44:28.532973  944177 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0308 03:44:28.532980  944177 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0308 03:44:28.532984  944177 command_runner.go:130] > [crio]
	I0308 03:44:28.532991  944177 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0308 03:44:28.532997  944177 command_runner.go:130] > # containers images, in this directory.
	I0308 03:44:28.533004  944177 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0308 03:44:28.533013  944177 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0308 03:44:28.533020  944177 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0308 03:44:28.533028  944177 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0308 03:44:28.533034  944177 command_runner.go:130] > # imagestore = ""
	I0308 03:44:28.533041  944177 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0308 03:44:28.533049  944177 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0308 03:44:28.533053  944177 command_runner.go:130] > storage_driver = "overlay"
	I0308 03:44:28.533061  944177 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0308 03:44:28.533067  944177 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0308 03:44:28.533075  944177 command_runner.go:130] > storage_option = [
	I0308 03:44:28.533080  944177 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0308 03:44:28.533083  944177 command_runner.go:130] > ]
	I0308 03:44:28.533089  944177 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0308 03:44:28.533095  944177 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0308 03:44:28.533099  944177 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0308 03:44:28.533107  944177 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0308 03:44:28.533114  944177 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0308 03:44:28.533120  944177 command_runner.go:130] > # always happen on a node reboot
	I0308 03:44:28.533125  944177 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0308 03:44:28.533138  944177 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0308 03:44:28.533147  944177 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0308 03:44:28.533152  944177 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0308 03:44:28.533177  944177 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0308 03:44:28.533189  944177 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0308 03:44:28.533196  944177 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0308 03:44:28.533200  944177 command_runner.go:130] > # internal_wipe = true
	I0308 03:44:28.533208  944177 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0308 03:44:28.533216  944177 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0308 03:44:28.533220  944177 command_runner.go:130] > # internal_repair = false
	I0308 03:44:28.533227  944177 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0308 03:44:28.533233  944177 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0308 03:44:28.533240  944177 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0308 03:44:28.533245  944177 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0308 03:44:28.533252  944177 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0308 03:44:28.533255  944177 command_runner.go:130] > [crio.api]
	I0308 03:44:28.533260  944177 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0308 03:44:28.533267  944177 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0308 03:44:28.533279  944177 command_runner.go:130] > # IP address on which the stream server will listen.
	I0308 03:44:28.533284  944177 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0308 03:44:28.533290  944177 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0308 03:44:28.533295  944177 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0308 03:44:28.533301  944177 command_runner.go:130] > # stream_port = "0"
	I0308 03:44:28.533306  944177 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0308 03:44:28.533310  944177 command_runner.go:130] > # stream_enable_tls = false
	I0308 03:44:28.533318  944177 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0308 03:44:28.533322  944177 command_runner.go:130] > # stream_idle_timeout = ""
	I0308 03:44:28.533328  944177 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0308 03:44:28.533338  944177 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0308 03:44:28.533343  944177 command_runner.go:130] > # minutes.
	I0308 03:44:28.533346  944177 command_runner.go:130] > # stream_tls_cert = ""
	I0308 03:44:28.533354  944177 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0308 03:44:28.533360  944177 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0308 03:44:28.533367  944177 command_runner.go:130] > # stream_tls_key = ""
	I0308 03:44:28.533373  944177 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0308 03:44:28.533381  944177 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0308 03:44:28.533402  944177 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0308 03:44:28.533408  944177 command_runner.go:130] > # stream_tls_ca = ""
	I0308 03:44:28.533416  944177 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0308 03:44:28.533427  944177 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0308 03:44:28.533436  944177 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0308 03:44:28.533440  944177 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0308 03:44:28.533446  944177 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0308 03:44:28.533454  944177 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0308 03:44:28.533458  944177 command_runner.go:130] > [crio.runtime]
	I0308 03:44:28.533463  944177 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0308 03:44:28.533470  944177 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0308 03:44:28.533474  944177 command_runner.go:130] > # "nofile=1024:2048"
	I0308 03:44:28.533480  944177 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0308 03:44:28.533485  944177 command_runner.go:130] > # default_ulimits = [
	I0308 03:44:28.533488  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533496  944177 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0308 03:44:28.533501  944177 command_runner.go:130] > # no_pivot = false
	I0308 03:44:28.533507  944177 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0308 03:44:28.533515  944177 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0308 03:44:28.533520  944177 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0308 03:44:28.533527  944177 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0308 03:44:28.533532  944177 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0308 03:44:28.533538  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0308 03:44:28.533545  944177 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0308 03:44:28.533549  944177 command_runner.go:130] > # Cgroup setting for conmon
	I0308 03:44:28.533558  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0308 03:44:28.533562  944177 command_runner.go:130] > conmon_cgroup = "pod"
	I0308 03:44:28.533567  944177 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0308 03:44:28.533573  944177 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0308 03:44:28.533582  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0308 03:44:28.533588  944177 command_runner.go:130] > conmon_env = [
	I0308 03:44:28.533593  944177 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0308 03:44:28.533597  944177 command_runner.go:130] > ]
	I0308 03:44:28.533602  944177 command_runner.go:130] > # Additional environment variables to set for all the
	I0308 03:44:28.533609  944177 command_runner.go:130] > # containers. These are overridden if set in the
	I0308 03:44:28.533614  944177 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0308 03:44:28.533620  944177 command_runner.go:130] > # default_env = [
	I0308 03:44:28.533629  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533637  944177 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0308 03:44:28.533649  944177 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0308 03:44:28.533656  944177 command_runner.go:130] > # selinux = false
	I0308 03:44:28.533662  944177 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0308 03:44:28.533670  944177 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0308 03:44:28.533675  944177 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0308 03:44:28.533682  944177 command_runner.go:130] > # seccomp_profile = ""
	I0308 03:44:28.533687  944177 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0308 03:44:28.533694  944177 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0308 03:44:28.533700  944177 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0308 03:44:28.533707  944177 command_runner.go:130] > # which might increase security.
	I0308 03:44:28.533711  944177 command_runner.go:130] > # This option is currently deprecated,
	I0308 03:44:28.533719  944177 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0308 03:44:28.533724  944177 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0308 03:44:28.533729  944177 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0308 03:44:28.533735  944177 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0308 03:44:28.533744  944177 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0308 03:44:28.533749  944177 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0308 03:44:28.533757  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.533761  944177 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0308 03:44:28.533767  944177 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0308 03:44:28.533771  944177 command_runner.go:130] > # the cgroup blockio controller.
	I0308 03:44:28.533776  944177 command_runner.go:130] > # blockio_config_file = ""
	I0308 03:44:28.533784  944177 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0308 03:44:28.533789  944177 command_runner.go:130] > # blockio parameters.
	I0308 03:44:28.533793  944177 command_runner.go:130] > # blockio_reload = false
	I0308 03:44:28.533802  944177 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0308 03:44:28.533806  944177 command_runner.go:130] > # irqbalance daemon.
	I0308 03:44:28.533813  944177 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0308 03:44:28.533821  944177 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0308 03:44:28.533830  944177 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0308 03:44:28.533836  944177 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0308 03:44:28.533844  944177 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0308 03:44:28.533850  944177 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0308 03:44:28.533858  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.533862  944177 command_runner.go:130] > # rdt_config_file = ""
	I0308 03:44:28.533869  944177 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0308 03:44:28.533879  944177 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0308 03:44:28.533919  944177 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0308 03:44:28.533927  944177 command_runner.go:130] > # separate_pull_cgroup = ""
	I0308 03:44:28.533933  944177 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0308 03:44:28.533938  944177 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0308 03:44:28.533942  944177 command_runner.go:130] > # will be added.
	I0308 03:44:28.533946  944177 command_runner.go:130] > # default_capabilities = [
	I0308 03:44:28.533949  944177 command_runner.go:130] > # 	"CHOWN",
	I0308 03:44:28.533953  944177 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0308 03:44:28.533956  944177 command_runner.go:130] > # 	"FSETID",
	I0308 03:44:28.533959  944177 command_runner.go:130] > # 	"FOWNER",
	I0308 03:44:28.533963  944177 command_runner.go:130] > # 	"SETGID",
	I0308 03:44:28.533968  944177 command_runner.go:130] > # 	"SETUID",
	I0308 03:44:28.533972  944177 command_runner.go:130] > # 	"SETPCAP",
	I0308 03:44:28.533976  944177 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0308 03:44:28.533979  944177 command_runner.go:130] > # 	"KILL",
	I0308 03:44:28.533984  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533991  944177 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0308 03:44:28.534000  944177 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0308 03:44:28.534004  944177 command_runner.go:130] > # add_inheritable_capabilities = false
	I0308 03:44:28.534010  944177 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0308 03:44:28.534018  944177 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0308 03:44:28.534022  944177 command_runner.go:130] > # default_sysctls = [
	I0308 03:44:28.534028  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534032  944177 command_runner.go:130] > # List of devices on the host that a
	I0308 03:44:28.534038  944177 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0308 03:44:28.534044  944177 command_runner.go:130] > # allowed_devices = [
	I0308 03:44:28.534048  944177 command_runner.go:130] > # 	"/dev/fuse",
	I0308 03:44:28.534051  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534056  944177 command_runner.go:130] > # List of additional devices. specified as
	I0308 03:44:28.534063  944177 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0308 03:44:28.534070  944177 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0308 03:44:28.534076  944177 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0308 03:44:28.534084  944177 command_runner.go:130] > # additional_devices = [
	I0308 03:44:28.534088  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534094  944177 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0308 03:44:28.534103  944177 command_runner.go:130] > # cdi_spec_dirs = [
	I0308 03:44:28.534109  944177 command_runner.go:130] > # 	"/etc/cdi",
	I0308 03:44:28.534113  944177 command_runner.go:130] > # 	"/var/run/cdi",
	I0308 03:44:28.534118  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534128  944177 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0308 03:44:28.534137  944177 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0308 03:44:28.534140  944177 command_runner.go:130] > # Defaults to false.
	I0308 03:44:28.534145  944177 command_runner.go:130] > # device_ownership_from_security_context = false
	I0308 03:44:28.534152  944177 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0308 03:44:28.534159  944177 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0308 03:44:28.534169  944177 command_runner.go:130] > # hooks_dir = [
	I0308 03:44:28.534174  944177 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0308 03:44:28.534177  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534183  944177 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0308 03:44:28.534191  944177 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0308 03:44:28.534197  944177 command_runner.go:130] > # its default mounts from the following two files:
	I0308 03:44:28.534202  944177 command_runner.go:130] > #
	I0308 03:44:28.534208  944177 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0308 03:44:28.534216  944177 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0308 03:44:28.534221  944177 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0308 03:44:28.534224  944177 command_runner.go:130] > #
	I0308 03:44:28.534230  944177 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0308 03:44:28.534238  944177 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0308 03:44:28.534244  944177 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0308 03:44:28.534251  944177 command_runner.go:130] > #      only add mounts it finds in this file.
	I0308 03:44:28.534255  944177 command_runner.go:130] > #
	I0308 03:44:28.534261  944177 command_runner.go:130] > # default_mounts_file = ""
	I0308 03:44:28.534267  944177 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0308 03:44:28.534274  944177 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0308 03:44:28.534277  944177 command_runner.go:130] > pids_limit = 1024
	I0308 03:44:28.534283  944177 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0308 03:44:28.534294  944177 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0308 03:44:28.534307  944177 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0308 03:44:28.534314  944177 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0308 03:44:28.534321  944177 command_runner.go:130] > # log_size_max = -1
	I0308 03:44:28.534328  944177 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0308 03:44:28.534343  944177 command_runner.go:130] > # log_to_journald = false
	I0308 03:44:28.534351  944177 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0308 03:44:28.534356  944177 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0308 03:44:28.534364  944177 command_runner.go:130] > # Path to directory for container attach sockets.
	I0308 03:44:28.534369  944177 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0308 03:44:28.534377  944177 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0308 03:44:28.534380  944177 command_runner.go:130] > # bind_mount_prefix = ""
	I0308 03:44:28.534388  944177 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0308 03:44:28.534392  944177 command_runner.go:130] > # read_only = false
	I0308 03:44:28.534398  944177 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0308 03:44:28.534406  944177 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0308 03:44:28.534410  944177 command_runner.go:130] > # live configuration reload.
	I0308 03:44:28.534416  944177 command_runner.go:130] > # log_level = "info"
	I0308 03:44:28.534421  944177 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0308 03:44:28.534428  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.534432  944177 command_runner.go:130] > # log_filter = ""
	I0308 03:44:28.534440  944177 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0308 03:44:28.534448  944177 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0308 03:44:28.534454  944177 command_runner.go:130] > # separated by comma.
	I0308 03:44:28.534461  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534467  944177 command_runner.go:130] > # uid_mappings = ""
	I0308 03:44:28.534473  944177 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0308 03:44:28.534479  944177 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0308 03:44:28.534483  944177 command_runner.go:130] > # separated by comma.
	I0308 03:44:28.534490  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534496  944177 command_runner.go:130] > # gid_mappings = ""
	I0308 03:44:28.534502  944177 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0308 03:44:28.534510  944177 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0308 03:44:28.534515  944177 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0308 03:44:28.534525  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534529  944177 command_runner.go:130] > # minimum_mappable_uid = -1
	I0308 03:44:28.534535  944177 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0308 03:44:28.534541  944177 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0308 03:44:28.534547  944177 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0308 03:44:28.534554  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534559  944177 command_runner.go:130] > # minimum_mappable_gid = -1
	I0308 03:44:28.534570  944177 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0308 03:44:28.534580  944177 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0308 03:44:28.534595  944177 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0308 03:44:28.534602  944177 command_runner.go:130] > # ctr_stop_timeout = 30
	I0308 03:44:28.534607  944177 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0308 03:44:28.534615  944177 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0308 03:44:28.534620  944177 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0308 03:44:28.534627  944177 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0308 03:44:28.534630  944177 command_runner.go:130] > drop_infra_ctr = false
	I0308 03:44:28.534636  944177 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0308 03:44:28.534644  944177 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0308 03:44:28.534651  944177 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0308 03:44:28.534657  944177 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0308 03:44:28.534664  944177 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0308 03:44:28.534669  944177 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0308 03:44:28.534677  944177 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0308 03:44:28.534682  944177 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0308 03:44:28.534688  944177 command_runner.go:130] > # shared_cpuset = ""
	I0308 03:44:28.534694  944177 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0308 03:44:28.534701  944177 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0308 03:44:28.534705  944177 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0308 03:44:28.534714  944177 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0308 03:44:28.534720  944177 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0308 03:44:28.534725  944177 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0308 03:44:28.534731  944177 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0308 03:44:28.534735  944177 command_runner.go:130] > # enable_criu_support = false
	I0308 03:44:28.534740  944177 command_runner.go:130] > # Enable/disable the generation of the container,
	I0308 03:44:28.534746  944177 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0308 03:44:28.534752  944177 command_runner.go:130] > # enable_pod_events = false
	I0308 03:44:28.534758  944177 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0308 03:44:28.534763  944177 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0308 03:44:28.534770  944177 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0308 03:44:28.534774  944177 command_runner.go:130] > # default_runtime = "runc"
	I0308 03:44:28.534781  944177 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0308 03:44:28.534789  944177 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0308 03:44:28.534799  944177 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0308 03:44:28.534813  944177 command_runner.go:130] > # creation as a file is not desired either.
	I0308 03:44:28.534823  944177 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0308 03:44:28.534830  944177 command_runner.go:130] > # the hostname is being managed dynamically.
	I0308 03:44:28.534834  944177 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0308 03:44:28.534839  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534845  944177 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0308 03:44:28.534851  944177 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0308 03:44:28.534856  944177 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0308 03:44:28.534862  944177 command_runner.go:130] > # Each entry in the table should follow the format:
	I0308 03:44:28.534865  944177 command_runner.go:130] > #
	I0308 03:44:28.534869  944177 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0308 03:44:28.534876  944177 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0308 03:44:28.534880  944177 command_runner.go:130] > # runtime_type = "oci"
	I0308 03:44:28.534935  944177 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0308 03:44:28.534948  944177 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0308 03:44:28.534952  944177 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0308 03:44:28.534957  944177 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0308 03:44:28.534960  944177 command_runner.go:130] > # monitor_env = []
	I0308 03:44:28.534965  944177 command_runner.go:130] > # privileged_without_host_devices = false
	I0308 03:44:28.534970  944177 command_runner.go:130] > # allowed_annotations = []
	I0308 03:44:28.534976  944177 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0308 03:44:28.534981  944177 command_runner.go:130] > # Where:
	I0308 03:44:28.534986  944177 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0308 03:44:28.534994  944177 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0308 03:44:28.535000  944177 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0308 03:44:28.535009  944177 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0308 03:44:28.535012  944177 command_runner.go:130] > #   in $PATH.
	I0308 03:44:28.535018  944177 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0308 03:44:28.535025  944177 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0308 03:44:28.535031  944177 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0308 03:44:28.535037  944177 command_runner.go:130] > #   state.
	I0308 03:44:28.535043  944177 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0308 03:44:28.535051  944177 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0308 03:44:28.535057  944177 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0308 03:44:28.535065  944177 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0308 03:44:28.535071  944177 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0308 03:44:28.535085  944177 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0308 03:44:28.535091  944177 command_runner.go:130] > #   The currently recognized values are:
	I0308 03:44:28.535099  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0308 03:44:28.535108  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0308 03:44:28.535114  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0308 03:44:28.535130  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0308 03:44:28.535139  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0308 03:44:28.535145  944177 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0308 03:44:28.535154  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0308 03:44:28.535159  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0308 03:44:28.535171  944177 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0308 03:44:28.535177  944177 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0308 03:44:28.535184  944177 command_runner.go:130] > #   deprecated option "conmon".
	I0308 03:44:28.535191  944177 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0308 03:44:28.535198  944177 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0308 03:44:28.535204  944177 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0308 03:44:28.535211  944177 command_runner.go:130] > #   should be moved to the container's cgroup
	I0308 03:44:28.535217  944177 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0308 03:44:28.535224  944177 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0308 03:44:28.535230  944177 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0308 03:44:28.535238  944177 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0308 03:44:28.535241  944177 command_runner.go:130] > #
	I0308 03:44:28.535245  944177 command_runner.go:130] > # Using the seccomp notifier feature:
	I0308 03:44:28.535248  944177 command_runner.go:130] > #
	I0308 03:44:28.535254  944177 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0308 03:44:28.535262  944177 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0308 03:44:28.535265  944177 command_runner.go:130] > #
	I0308 03:44:28.535271  944177 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0308 03:44:28.535279  944177 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0308 03:44:28.535282  944177 command_runner.go:130] > #
	I0308 03:44:28.535288  944177 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0308 03:44:28.535293  944177 command_runner.go:130] > # feature.
	I0308 03:44:28.535296  944177 command_runner.go:130] > #
	I0308 03:44:28.535302  944177 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0308 03:44:28.535315  944177 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0308 03:44:28.535324  944177 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0308 03:44:28.535335  944177 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0308 03:44:28.535346  944177 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0308 03:44:28.535351  944177 command_runner.go:130] > #
	I0308 03:44:28.535357  944177 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0308 03:44:28.535364  944177 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0308 03:44:28.535367  944177 command_runner.go:130] > #
	I0308 03:44:28.535373  944177 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0308 03:44:28.535381  944177 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0308 03:44:28.535384  944177 command_runner.go:130] > #
	I0308 03:44:28.535389  944177 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0308 03:44:28.535398  944177 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0308 03:44:28.535401  944177 command_runner.go:130] > # limitation.
	I0308 03:44:28.535405  944177 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0308 03:44:28.535410  944177 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0308 03:44:28.535416  944177 command_runner.go:130] > runtime_type = "oci"
	I0308 03:44:28.535419  944177 command_runner.go:130] > runtime_root = "/run/runc"
	I0308 03:44:28.535426  944177 command_runner.go:130] > runtime_config_path = ""
	I0308 03:44:28.535431  944177 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0308 03:44:28.535434  944177 command_runner.go:130] > monitor_cgroup = "pod"
	I0308 03:44:28.535440  944177 command_runner.go:130] > monitor_exec_cgroup = ""
	I0308 03:44:28.535444  944177 command_runner.go:130] > monitor_env = [
	I0308 03:44:28.535452  944177 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0308 03:44:28.535455  944177 command_runner.go:130] > ]
	I0308 03:44:28.535460  944177 command_runner.go:130] > privileged_without_host_devices = false
	I0308 03:44:28.535469  944177 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0308 03:44:28.535473  944177 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0308 03:44:28.535479  944177 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0308 03:44:28.535488  944177 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0308 03:44:28.535495  944177 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0308 03:44:28.535503  944177 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0308 03:44:28.535512  944177 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0308 03:44:28.535521  944177 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0308 03:44:28.535526  944177 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0308 03:44:28.535534  944177 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0308 03:44:28.535540  944177 command_runner.go:130] > # Example:
	I0308 03:44:28.535544  944177 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0308 03:44:28.535554  944177 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0308 03:44:28.535559  944177 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0308 03:44:28.535565  944177 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0308 03:44:28.535571  944177 command_runner.go:130] > # cpuset = 0
	I0308 03:44:28.535574  944177 command_runner.go:130] > # cpushares = "0-1"
	I0308 03:44:28.535577  944177 command_runner.go:130] > # Where:
	I0308 03:44:28.535581  944177 command_runner.go:130] > # The workload name is workload-type.
	I0308 03:44:28.535587  944177 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0308 03:44:28.535592  944177 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0308 03:44:28.535597  944177 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0308 03:44:28.535604  944177 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0308 03:44:28.535610  944177 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0308 03:44:28.535614  944177 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0308 03:44:28.535620  944177 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0308 03:44:28.535624  944177 command_runner.go:130] > # Default value is set to true
	I0308 03:44:28.535628  944177 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0308 03:44:28.535633  944177 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0308 03:44:28.535637  944177 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0308 03:44:28.535641  944177 command_runner.go:130] > # Default value is set to 'false'
	I0308 03:44:28.535645  944177 command_runner.go:130] > # disable_hostport_mapping = false
	I0308 03:44:28.535654  944177 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0308 03:44:28.535657  944177 command_runner.go:130] > #
	I0308 03:44:28.535665  944177 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0308 03:44:28.535671  944177 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0308 03:44:28.535679  944177 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0308 03:44:28.535686  944177 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0308 03:44:28.535693  944177 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0308 03:44:28.535696  944177 command_runner.go:130] > [crio.image]
	I0308 03:44:28.535703  944177 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0308 03:44:28.535708  944177 command_runner.go:130] > # default_transport = "docker://"
	I0308 03:44:28.535713  944177 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0308 03:44:28.535722  944177 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0308 03:44:28.535725  944177 command_runner.go:130] > # global_auth_file = ""
	I0308 03:44:28.535730  944177 command_runner.go:130] > # The image used to instantiate infra containers.
	I0308 03:44:28.535737  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.535741  944177 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0308 03:44:28.535754  944177 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0308 03:44:28.535762  944177 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0308 03:44:28.535767  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.535774  944177 command_runner.go:130] > # pause_image_auth_file = ""
	I0308 03:44:28.535781  944177 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0308 03:44:28.535790  944177 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0308 03:44:28.535795  944177 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0308 03:44:28.535803  944177 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0308 03:44:28.535807  944177 command_runner.go:130] > # pause_command = "/pause"
	I0308 03:44:28.535815  944177 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0308 03:44:28.535820  944177 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0308 03:44:28.535829  944177 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0308 03:44:28.535834  944177 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0308 03:44:28.535844  944177 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0308 03:44:28.535852  944177 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0308 03:44:28.535855  944177 command_runner.go:130] > # pinned_images = [
	I0308 03:44:28.535859  944177 command_runner.go:130] > # ]
	I0308 03:44:28.535865  944177 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0308 03:44:28.535874  944177 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0308 03:44:28.535879  944177 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0308 03:44:28.535887  944177 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0308 03:44:28.535892  944177 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0308 03:44:28.535898  944177 command_runner.go:130] > # signature_policy = ""
	I0308 03:44:28.535903  944177 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0308 03:44:28.535912  944177 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0308 03:44:28.535918  944177 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0308 03:44:28.535926  944177 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0308 03:44:28.535933  944177 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0308 03:44:28.535938  944177 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0308 03:44:28.535943  944177 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0308 03:44:28.535951  944177 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0308 03:44:28.535955  944177 command_runner.go:130] > # changing them here.
	I0308 03:44:28.535959  944177 command_runner.go:130] > # insecure_registries = [
	I0308 03:44:28.535964  944177 command_runner.go:130] > # ]
	I0308 03:44:28.535970  944177 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0308 03:44:28.535977  944177 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0308 03:44:28.535986  944177 command_runner.go:130] > # image_volumes = "mkdir"
	I0308 03:44:28.535993  944177 command_runner.go:130] > # Temporary directory to use for storing big files
	I0308 03:44:28.535997  944177 command_runner.go:130] > # big_files_temporary_dir = ""
	I0308 03:44:28.536005  944177 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0308 03:44:28.536012  944177 command_runner.go:130] > # CNI plugins.
	I0308 03:44:28.536018  944177 command_runner.go:130] > [crio.network]
	I0308 03:44:28.536026  944177 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0308 03:44:28.536033  944177 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0308 03:44:28.536037  944177 command_runner.go:130] > # cni_default_network = ""
	I0308 03:44:28.536045  944177 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0308 03:44:28.536050  944177 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0308 03:44:28.536056  944177 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0308 03:44:28.536059  944177 command_runner.go:130] > # plugin_dirs = [
	I0308 03:44:28.536063  944177 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0308 03:44:28.536066  944177 command_runner.go:130] > # ]
	I0308 03:44:28.536071  944177 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0308 03:44:28.536077  944177 command_runner.go:130] > [crio.metrics]
	I0308 03:44:28.536082  944177 command_runner.go:130] > # Globally enable or disable metrics support.
	I0308 03:44:28.536088  944177 command_runner.go:130] > enable_metrics = true
	I0308 03:44:28.536092  944177 command_runner.go:130] > # Specify enabled metrics collectors.
	I0308 03:44:28.536096  944177 command_runner.go:130] > # Per default all metrics are enabled.
	I0308 03:44:28.536105  944177 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0308 03:44:28.536110  944177 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0308 03:44:28.536115  944177 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0308 03:44:28.536122  944177 command_runner.go:130] > # metrics_collectors = [
	I0308 03:44:28.536125  944177 command_runner.go:130] > # 	"operations",
	I0308 03:44:28.536130  944177 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0308 03:44:28.536137  944177 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0308 03:44:28.536141  944177 command_runner.go:130] > # 	"operations_errors",
	I0308 03:44:28.536148  944177 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0308 03:44:28.536152  944177 command_runner.go:130] > # 	"image_pulls_by_name",
	I0308 03:44:28.536156  944177 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0308 03:44:28.536160  944177 command_runner.go:130] > # 	"image_pulls_failures",
	I0308 03:44:28.536169  944177 command_runner.go:130] > # 	"image_pulls_successes",
	I0308 03:44:28.536173  944177 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0308 03:44:28.536179  944177 command_runner.go:130] > # 	"image_layer_reuse",
	I0308 03:44:28.536188  944177 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0308 03:44:28.536195  944177 command_runner.go:130] > # 	"containers_oom_total",
	I0308 03:44:28.536203  944177 command_runner.go:130] > # 	"containers_oom",
	I0308 03:44:28.536209  944177 command_runner.go:130] > # 	"processes_defunct",
	I0308 03:44:28.536213  944177 command_runner.go:130] > # 	"operations_total",
	I0308 03:44:28.536219  944177 command_runner.go:130] > # 	"operations_latency_seconds",
	I0308 03:44:28.536223  944177 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0308 03:44:28.536227  944177 command_runner.go:130] > # 	"operations_errors_total",
	I0308 03:44:28.536231  944177 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0308 03:44:28.536235  944177 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0308 03:44:28.536241  944177 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0308 03:44:28.536246  944177 command_runner.go:130] > # 	"image_pulls_success_total",
	I0308 03:44:28.536252  944177 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0308 03:44:28.536256  944177 command_runner.go:130] > # 	"containers_oom_count_total",
	I0308 03:44:28.536265  944177 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0308 03:44:28.536270  944177 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0308 03:44:28.536273  944177 command_runner.go:130] > # ]
	I0308 03:44:28.536279  944177 command_runner.go:130] > # The port on which the metrics server will listen.
	I0308 03:44:28.536283  944177 command_runner.go:130] > # metrics_port = 9090
	I0308 03:44:28.536288  944177 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0308 03:44:28.536294  944177 command_runner.go:130] > # metrics_socket = ""
	I0308 03:44:28.536298  944177 command_runner.go:130] > # The certificate for the secure metrics server.
	I0308 03:44:28.536304  944177 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0308 03:44:28.536312  944177 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0308 03:44:28.536321  944177 command_runner.go:130] > # certificate on any modification event.
	I0308 03:44:28.536327  944177 command_runner.go:130] > # metrics_cert = ""
	I0308 03:44:28.536332  944177 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0308 03:44:28.536337  944177 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0308 03:44:28.536341  944177 command_runner.go:130] > # metrics_key = ""
	I0308 03:44:28.536346  944177 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0308 03:44:28.536350  944177 command_runner.go:130] > [crio.tracing]
	I0308 03:44:28.536356  944177 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0308 03:44:28.536362  944177 command_runner.go:130] > # enable_tracing = false
	I0308 03:44:28.536367  944177 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0308 03:44:28.536374  944177 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0308 03:44:28.536380  944177 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0308 03:44:28.536390  944177 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0308 03:44:28.536397  944177 command_runner.go:130] > # CRI-O NRI configuration.
	I0308 03:44:28.536400  944177 command_runner.go:130] > [crio.nri]
	I0308 03:44:28.536404  944177 command_runner.go:130] > # Globally enable or disable NRI.
	I0308 03:44:28.536408  944177 command_runner.go:130] > # enable_nri = false
	I0308 03:44:28.536412  944177 command_runner.go:130] > # NRI socket to listen on.
	I0308 03:44:28.536418  944177 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0308 03:44:28.536422  944177 command_runner.go:130] > # NRI plugin directory to use.
	I0308 03:44:28.536429  944177 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0308 03:44:28.536434  944177 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0308 03:44:28.536439  944177 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0308 03:44:28.536444  944177 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0308 03:44:28.536451  944177 command_runner.go:130] > # nri_disable_connections = false
	I0308 03:44:28.536456  944177 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0308 03:44:28.536463  944177 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0308 03:44:28.536467  944177 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0308 03:44:28.536472  944177 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0308 03:44:28.536478  944177 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0308 03:44:28.536484  944177 command_runner.go:130] > [crio.stats]
	I0308 03:44:28.536490  944177 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0308 03:44:28.536499  944177 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0308 03:44:28.536503  944177 command_runner.go:130] > # stats_collection_period = 0
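	The [crio.metrics] portion of the dump above shows metrics enabled (enable_metrics = true) on the commented default metrics_port 9090. As an illustrative aside only, a minimal Go sketch for pulling those Prometheus metrics off the node could look like the following; the 127.0.0.1 host, plain-HTTP scheme, /metrics path and the metric-name prefix are assumptions taken from the commented defaults, not something this test run exercises.

// scrape.go - minimal sketch of reading CRI-O's Prometheus metrics, which the
// config above enables (enable_metrics = true) on the default metrics_port 9090.
// Host, port and the plain-HTTP assumption are illustrative; a secured setup
// would use metrics_cert/metrics_key instead.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("scrape failed:", err)
		return
	}
	defer resp.Body.Close()

	// Print only the counters that the collector list above refers to as
	// "container_runtime_crio_operations".
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "container_runtime_crio_operations") {
			fmt.Println(line)
		}
	}
}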
	I0308 03:44:28.536649  944177 cni.go:84] Creating CNI manager for ""
	I0308 03:44:28.536661  944177 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 03:44:28.536672  944177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:44:28.536697  944177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959285 NodeName:multinode-959285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:44:28.536854  944177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959285"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 03:44:28.536923  944177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:44:28.548943  944177 command_runner.go:130] > kubeadm
	I0308 03:44:28.548957  944177 command_runner.go:130] > kubectl
	I0308 03:44:28.548961  944177 command_runner.go:130] > kubelet
	I0308 03:44:28.548982  944177 binaries.go:44] Found k8s binaries, skipping transfer
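	The three directory entries above are the probe behind "Found k8s binaries, skipping transfer": if kubeadm, kubectl and kubelet already exist under the versioned binaries directory, the copy step is skipped. A simplified sketch of that check, with the path taken from the log and everything else illustrative:

// binariespresent.go - sketch of the "Found k8s binaries, skipping transfer"
// check: all three binaries must exist under the versioned directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/minikube/binaries/v1.28.4" // path as shown in the log
	missing := false
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
			fmt.Println("missing:", name, err)
			missing = true
		}
	}
	if !missing {
		fmt.Println("Found k8s binaries, skipping transfer")
	}
}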
	I0308 03:44:28.549035  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 03:44:28.561508  944177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0308 03:44:28.582025  944177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:44:28.602326  944177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0308 03:44:28.623384  944177 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0308 03:44:28.627573  944177 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
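	The grep above confirms that control-plane.minikube.internal already maps to the node IP in /etc/hosts. A hedged Go sketch of an equivalent check (read /etc/hosts, look for the exact IP-plus-hostname pair); the file path and entry values come from the log, the parsing is simplified:

// hostsentry.go - sketch of the /etc/hosts check above: does an entry map
// control-plane.minikube.internal to the expected node IP?
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	wantIP, wantHost := "192.168.39.174", "control-plane.minikube.internal"

	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == wantIP {
			for _, h := range fields[1:] {
				if h == wantHost {
					fmt.Println(wantIP, wantHost, "already present")
					return
				}
			}
		}
	}
	fmt.Println("entry missing; it would need to be appended")
}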
	I0308 03:44:28.627636  944177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:44:28.787906  944177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:44:28.804102  944177 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285 for IP: 192.168.39.174
	I0308 03:44:28.804129  944177 certs.go:194] generating shared ca certs ...
	I0308 03:44:28.804158  944177 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:44:28.804331  944177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:44:28.804369  944177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:44:28.804379  944177 certs.go:256] generating profile certs ...
	I0308 03:44:28.804459  944177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/client.key
	I0308 03:44:28.804519  944177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key.a2baa7d4
	I0308 03:44:28.804555  944177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key
	I0308 03:44:28.804566  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:44:28.804583  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:44:28.804595  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:44:28.804607  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:44:28.804621  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:44:28.804633  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:44:28.804645  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:44:28.804656  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:44:28.804713  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:44:28.804743  944177 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:44:28.804753  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:44:28.804774  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:44:28.804796  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:44:28.804816  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:44:28.804852  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:44:28.804879  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:44:28.804892  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:44:28.804904  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:28.805578  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:44:28.833052  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:44:28.859159  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:44:28.884896  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:44:28.910965  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 03:44:28.936643  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:44:28.962806  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:44:28.988186  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:44:29.014576  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:44:29.040191  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:44:29.066250  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:44:29.091835  944177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:44:29.110216  944177 ssh_runner.go:195] Run: openssl version
	I0308 03:44:29.116538  944177 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 03:44:29.116615  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:44:29.128241  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133034  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133191  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133248  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.139185  944177 command_runner.go:130] > 51391683
	I0308 03:44:29.139370  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:44:29.149326  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:44:29.160526  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165355  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165398  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165431  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.171504  944177 command_runner.go:130] > 3ec20f2e
	I0308 03:44:29.171559  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:44:29.181243  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:44:29.192425  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197084  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197258  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197313  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.203352  944177 command_runner.go:130] > b5213941
	I0308 03:44:29.203469  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
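	The openssl/ln pairs above follow the hashed-certificates-directory convention: compute the certificate's subject hash (e.g. 51391683) and link /etc/ssl/certs/<hash>.0 to the certificate so OpenSSL-based clients can find it during chain building. A rough Go sketch of the same two steps, shelling out to openssl rather than reimplementing the subject hash; the paths mirror the log and the error handling is minimal:

// certlink.go - minimal sketch of the /etc/ssl/certs/<hash>.0 symlink step
// seen in the log above. Assumes the openssl binary is on PATH; the paths
// are illustrative, not the exact set minikube manages.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/918988.pem"

	// "openssl x509 -hash -noout" prints the subject hash OpenSSL uses to
	// look certificates up in a hashed directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "hashing failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 -> the certificate, mirroring the
	// "test -L ... || ln -fs ..." command in the log.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink failed:", err)
			os.Exit(1)
		}
	}
	fmt.Println("linked", link, "->", cert)
}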
	I0308 03:44:29.213297  944177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:44:29.218272  944177 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:44:29.218298  944177 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0308 03:44:29.218304  944177 command_runner.go:130] > Device: 253,1	Inode: 9432637     Links: 1
	I0308 03:44:29.218310  944177 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 03:44:29.218316  944177 command_runner.go:130] > Access: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218323  944177 command_runner.go:130] > Modify: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218328  944177 command_runner.go:130] > Change: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218333  944177 command_runner.go:130] >  Birth: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218375  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 03:44:29.224185  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.224231  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 03:44:29.230158  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.230234  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 03:44:29.236160  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.236392  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 03:44:29.242075  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.242218  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 03:44:29.248130  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.248169  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 03:44:29.254021  944177 command_runner.go:130] > Certificate will not expire
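	Each "Certificate will not expire" line above is the result of openssl x509 -checkend 86400, i.e. "does this certificate remain valid for at least another 24 hours?". A small Go equivalent using crypto/x509, with one certificate path taken from the log for illustration:

// checkend.go - rough Go equivalent of the "openssl x509 -checkend 86400"
// probes above: report whether a certificate expires within the next 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Expiring within 24h means "now + 24h" falls past NotAfter.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}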
	I0308 03:44:29.254229  944177 kubeadm.go:391] StartCluster: {Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:44:29.254343  944177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:44:29.254383  944177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:44:29.291322  944177 command_runner.go:130] > 17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0
	I0308 03:44:29.291363  944177 command_runner.go:130] > f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134
	I0308 03:44:29.291369  944177 command_runner.go:130] > 730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce
	I0308 03:44:29.291380  944177 command_runner.go:130] > 875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6
	I0308 03:44:29.291459  944177 command_runner.go:130] > 92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970
	I0308 03:44:29.291569  944177 command_runner.go:130] > d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c
	I0308 03:44:29.291661  944177 command_runner.go:130] > b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa
	I0308 03:44:29.291858  944177 command_runner.go:130] > dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619
	I0308 03:44:29.293334  944177 cri.go:89] found id: "17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0"
	I0308 03:44:29.293347  944177 cri.go:89] found id: "f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134"
	I0308 03:44:29.293350  944177 cri.go:89] found id: "730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce"
	I0308 03:44:29.293354  944177 cri.go:89] found id: "875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6"
	I0308 03:44:29.293356  944177 cri.go:89] found id: "92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970"
	I0308 03:44:29.293360  944177 cri.go:89] found id: "d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c"
	I0308 03:44:29.293362  944177 cri.go:89] found id: "b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa"
	I0308 03:44:29.293365  944177 cri.go:89] found id: "dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619"
	I0308 03:44:29.293367  944177 cri.go:89] found id: ""
	I0308 03:44:29.293404  944177 ssh_runner.go:195] Run: sudo runc list -f json
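	The block above collects kube-system container IDs by running crictl with --quiet and a namespace label filter, then treats each non-empty output line as an ID. A simplified sketch of that listing step; the command and label are the ones shown in the log, the surrounding plumbing is illustrative:

// listids.go - hedged sketch of collecting kube-system container IDs the way
// the step above does: run crictl with a --quiet label filter and split the
// output into one ID per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}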
	
	
	==> CRI-O <==
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.450669338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869554450645688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=772fd3c9-0f33-46bb-a20c-da495f503a80 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.451852662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbf474ed-9510-4704-8771-a00baee74809 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.451934506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbf474ed-9510-4704-8771-a00baee74809 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.452320856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbf474ed-9510-4704-8771-a00baee74809 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.510366366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c68d4a90-e714-4ba8-8acf-d3200ac4a4bd name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.510466301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c68d4a90-e714-4ba8-8acf-d3200ac4a4bd name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.511675967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=abc32929-8c80-4d4d-b39f-269994d8d98e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.512123633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869554512102093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=abc32929-8c80-4d4d-b39f-269994d8d98e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.512918629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a213a679-4a6b-4ac0-bac1-84202a05b8b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.512969902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a213a679-4a6b-4ac0-bac1-84202a05b8b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.513362968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a213a679-4a6b-4ac0-bac1-84202a05b8b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.563475317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bb235d7-707e-4cf5-b0b5-1a2e2b84395f name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.563752716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bb235d7-707e-4cf5-b0b5-1a2e2b84395f name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.565028106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61eb6ca1-776b-466c-a98d-6e2a8ac67092 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.565663691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869554565631577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61eb6ca1-776b-466c-a98d-6e2a8ac67092 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.566119610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11cca582-0609-4135-a93e-473cd630f6f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.566223036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11cca582-0609-4135-a93e-473cd630f6f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.566645936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11cca582-0609-4135-a93e-473cd630f6f5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.620683353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96fc2c5d-4d10-4ede-bf95-b7793ffd3656 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.620780274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96fc2c5d-4d10-4ede-bf95-b7793ffd3656 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.621985914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c383abe-4b21-4ec4-9eb2-5d43fac7defc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.622522585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869554622497745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c383abe-4b21-4ec4-9eb2-5d43fac7defc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.623001691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c18c7993-d154-4181-9fe4-35a8914f7360 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.623081340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c18c7993-d154-4181-9fe4-35a8914f7360 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:45:54 multinode-959285 crio[2846]: time="2024-03-08 03:45:54.623506358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c18c7993-d154-4181-9fe4-35a8914f7360 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	02d1768386163       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      44 seconds ago       Running             busybox                   1                   db73c0c33ed0e       busybox-5b5d89c9d6-g8bd8
	e2ebb36caea2e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   3425d248dd6e9       coredns-5dd5756b68-p62xk
	e89d47e2ebb7d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   05025c1e185f2       kindnet-bhngm
	711f3f6d65ab3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   7a49b5a4798ee       kube-proxy-8xrsf
	c1d633fccfd86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   fe9724edf637d       storage-provisioner
	773e5a361b281       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   e62d5ada13500       etcd-multinode-959285
	6986001b9ff7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   8fca1c8789132       kube-scheduler-multinode-959285
	fc0ed8400df6e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   cb7af5e69e23a       kube-controller-manager-multinode-959285
	04d727246d46d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   3306f288fc764       kube-apiserver-multinode-959285
	b16e4f0321c53       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   6fe4a93ab82e5       busybox-5b5d89c9d6-g8bd8
	17cc3da4fab78       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   56de50ef38281       coredns-5dd5756b68-p62xk
	f68a6db083c5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   6e04f01517180       storage-provisioner
	730a5b93ab6ff       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   49403196125f0       kindnet-bhngm
	875a418eed9d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   2a17cde2c1af7       kube-proxy-8xrsf
	92713bc5e22dd       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   ccbccd91888ca       kube-scheduler-multinode-959285
	d029bc95c326b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   009413488c812       kube-controller-manager-multinode-959285
	b6da8191bde78       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   679735dda3432       kube-apiserver-multinode-959285
	dde66ebafc3a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   a76003d8ad50d       etcd-multinode-959285
	
	
	==> coredns [17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0] <==
	[INFO] 10.244.1.2:57458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001548452s
	[INFO] 10.244.1.2:56901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123765s
	[INFO] 10.244.1.2:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102737s
	[INFO] 10.244.1.2:46569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001071597s
	[INFO] 10.244.1.2:36601 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:52737 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102611s
	[INFO] 10.244.1.2:49515 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117871s
	[INFO] 10.244.0.3:41977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111507s
	[INFO] 10.244.0.3:34458 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077889s
	[INFO] 10.244.0.3:37732 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160261s
	[INFO] 10.244.0.3:45749 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011699s
	[INFO] 10.244.1.2:57295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132029s
	[INFO] 10.244.1.2:46741 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160195s
	[INFO] 10.244.1.2:49446 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081183s
	[INFO] 10.244.1.2:45135 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156419s
	[INFO] 10.244.0.3:36952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108031s
	[INFO] 10.244.0.3:51440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159041s
	[INFO] 10.244.0.3:35793 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095292s
	[INFO] 10.244.0.3:32780 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120001s
	[INFO] 10.244.1.2:55848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153431s
	[INFO] 10.244.1.2:45603 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124773s
	[INFO] 10.244.1.2:41815 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142748s
	[INFO] 10.244.1.2:58363 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012177s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46244 - 2113 "HINFO IN 2316823638841521581.2085409816769772346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0107719s
	
	
	==> describe nodes <==
	Name:               multinode-959285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=multinode-959285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_38_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:38:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959285
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:45:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    multinode-959285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 718ff2e15eda48259630038532e2e785
	  System UUID:                718ff2e1-5eda-4825-9630-038532e2e785
	  Boot ID:                    c0ccbce6-e354-4420-9ba8-b8aac7c8ade4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-g8bd8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-5dd5756b68-p62xk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m25s
	  kube-system                 etcd-multinode-959285                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m37s
	  kube-system                 kindnet-bhngm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m25s
	  kube-system                 kube-apiserver-multinode-959285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-controller-manager-multinode-959285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-8xrsf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-scheduler-multinode-959285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m24s              kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m37s              kubelet          Node multinode-959285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m37s              kubelet          Node multinode-959285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s              kubelet          Node multinode-959285 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m37s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m25s              node-controller  Node multinode-959285 event: Registered Node multinode-959285 in Controller
	  Normal  NodeReady                7m21s              kubelet          Node multinode-959285 status is now: NodeReady
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 84s)  kubelet          Node multinode-959285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 84s)  kubelet          Node multinode-959285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 84s)  kubelet          Node multinode-959285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node multinode-959285 event: Registered Node multinode-959285 in Controller
	
	
	Name:               multinode-959285-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959285-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=multinode-959285
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_45_17_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:45:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959285-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:45:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:45:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:45:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:45:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:45:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-959285-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7310cc09066c4521b42a476c8dc18cee
	  System UUID:                7310cc09-066c-4521-b42a-476c8dc18cee
	  Boot ID:                    75d8bbea-6d94-42b4-bada-8cc518a107d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-rrf76    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kindnet-97wl4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m43s
	  kube-system                 kube-proxy-vsgll            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 35s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m43s (x5 over 6m45s)  kubelet     Node multinode-959285-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x5 over 6m45s)  kubelet     Node multinode-959285-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x5 over 6m45s)  kubelet     Node multinode-959285-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m37s                  kubelet     Node multinode-959285-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  37s (x5 over 39s)      kubelet     Node multinode-959285-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x5 over 39s)      kubelet     Node multinode-959285-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x5 over 39s)      kubelet     Node multinode-959285-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                32s                    kubelet     Node multinode-959285-m02 status is now: NodeReady
	
	
	Name:               multinode-959285-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959285-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=multinode-959285
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_45_46_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:45:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-959285-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:45:51 +0000   Fri, 08 Mar 2024 03:45:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:45:51 +0000   Fri, 08 Mar 2024 03:45:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:45:51 +0000   Fri, 08 Mar 2024 03:45:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:45:51 +0000   Fri, 08 Mar 2024 03:45:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    multinode-959285-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 2712f7f6b46e45df9783c1b9b42aee01
	  System UUID:                2712f7f6-b46e-45df-9783-c1b9b42aee01
	  Boot ID:                    d687d7e3-bbf7-4b2e-94bc-57953f96066a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jtsrw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m2s
	  kube-system                 kube-proxy-6k8t9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  Starting                 6s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m2s (x5 over 6m3s)    kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x5 over 6m3s)    kubelet          Node multinode-959285-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x5 over 6m3s)    kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m55s                  kubelet          Node multinode-959285-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m21s (x5 over 5m23s)  kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m21s (x5 over 5m23s)  kubelet          Node multinode-959285-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m21s (x5 over 5m23s)  kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m16s                  kubelet          Node multinode-959285-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  10s (x5 over 12s)      kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x5 over 12s)      kubelet          Node multinode-959285-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x5 over 12s)      kubelet          Node multinode-959285-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-959285-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062764] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.175883] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.143623] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269744] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Mar 8 03:38] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060640] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.878074] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +1.411474] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.376047] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.079954] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.642629] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.195499] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[Mar 8 03:39] kauditd_printk_skb: 82 callbacks suppressed
	[Mar 8 03:44] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +0.147894] systemd-fstab-generator[2779]: Ignoring "noauto" option for root device
	[  +0.178341] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.152148] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.269491] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[ +10.267011] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +0.087807] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.853925] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.742604] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.562792] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.005031] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[Mar 8 03:45] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d] <==
	{"level":"info","ts":"2024-03-08T03:44:32.576687Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T03:44:32.576816Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-08T03:44:32.577243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2024-03-08T03:44:32.579431Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-03-08T03:44:32.579581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:44:32.579631Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:44:32.599985Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T03:44:32.600338Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T03:44:32.600395Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:44:32.600566Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:44:32.600599Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:44:33.825765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.82583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.825847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.825858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.831666Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-959285 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:44:33.831833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:44:33.833206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:44:33.838391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:44:33.839566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-03-08T03:44:33.843335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:44:33.843374Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619] <==
	{"level":"info","ts":"2024-03-08T03:38:11.829571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T03:38:11.829596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-03-08T03:38:11.834521Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-959285 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:38:11.834715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:38:11.835734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-03-08T03:38:11.838395Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.838565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:38:11.83947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:38:11.84435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:38:11.844396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T03:38:11.84469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.844846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.847322Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:39:20.569585Z","caller":"traceutil/trace.go:171","msg":"trace[1456353751] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"139.47395ms","start":"2024-03-08T03:39:20.430069Z","end":"2024-03-08T03:39:20.569543Z","steps":["trace[1456353751] 'process raft request'  (duration: 139.305851ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:40:38.264721Z","caller":"traceutil/trace.go:171","msg":"trace[1413497469] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"182.688088ms","start":"2024-03-08T03:40:38.082004Z","end":"2024-03-08T03:40:38.264692Z","steps":["trace[1413497469] 'process raft request'  (duration: 182.348657ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:42:46.32746Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-08T03:42:46.327806Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-959285","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"]}
	{"level":"warn","ts":"2024-03-08T03:42:46.327985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.328101Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.414187Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.174:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.414303Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.174:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T03:42:46.414367Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"72f328261b8d7407","current-leader-member-id":"72f328261b8d7407"}
	{"level":"info","ts":"2024-03-08T03:42:46.416503Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.41663Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.416639Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-959285","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"]}
	
	
	==> kernel <==
	 03:45:55 up 8 min,  0 users,  load average: 0.16, 0.15, 0.09
	Linux multinode-959285 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce] <==
	I0308 03:42:03.460159       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:13.466173       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:13.466228       1 main.go:227] handling current node
	I0308 03:42:13.466249       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:13.466308       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:13.466448       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:13.466481       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:23.473611       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:23.473665       1 main.go:227] handling current node
	I0308 03:42:23.473675       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:23.473681       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:23.473813       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:23.473897       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:33.482224       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:33.482344       1 main.go:227] handling current node
	I0308 03:42:33.482364       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:33.482381       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:33.482515       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:33.482555       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:43.487642       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:43.487745       1 main.go:227] handling current node
	I0308 03:42:43.487778       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:43.487798       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:43.487932       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:43.487954       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1] <==
	I0308 03:45:07.703999       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:45:17.713943       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:45:17.713988       1 main.go:227] handling current node
	I0308 03:45:17.713998       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:45:17.714004       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:45:17.714178       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:45:17.714212       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:45:27.728199       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:45:27.728347       1 main.go:227] handling current node
	I0308 03:45:27.728370       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:45:27.728377       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:45:27.728518       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:45:27.728554       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:45:37.734993       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:45:37.735055       1 main.go:227] handling current node
	I0308 03:45:37.735073       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:45:37.735079       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:45:37.735196       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:45:37.735241       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:45:47.740896       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:45:47.740943       1 main.go:227] handling current node
	I0308 03:45:47.740953       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:45:47.740960       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:45:47.741107       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:45:47.741140       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3] <==
	I0308 03:44:35.326499       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 03:44:35.343361       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 03:44:35.343457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 03:44:35.418089       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 03:44:35.418350       1 aggregator.go:166] initial CRD sync complete...
	I0308 03:44:35.418407       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 03:44:35.418431       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 03:44:35.430526       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:44:35.505093       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 03:44:35.510673       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:44:35.511406       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 03:44:35.511446       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 03:44:35.511937       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 03:44:35.514158       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 03:44:35.516892       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 03:44:35.518582       1 cache.go:39] Caches are synced for autoregister controller
	E0308 03:44:35.522555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0308 03:44:36.365205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 03:44:37.934969       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 03:44:38.056236       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 03:44:38.067920       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 03:44:38.162583       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 03:44:38.178171       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 03:44:47.842525       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 03:44:47.901130       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa] <==
	I0308 03:42:46.369643       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0308 03:42:46.369656       1 establishing_controller.go:87] Shutting down EstablishingController
	I0308 03:42:46.369706       1 naming_controller.go:302] Shutting down NamingConditionController
	I0308 03:42:46.369728       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0308 03:42:46.369780       1 available_controller.go:439] Shutting down AvailableConditionController
	I0308 03:42:46.369845       1 controller.go:129] Ending legacy_token_tracking_controller
	I0308 03:42:46.369879       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0308 03:42:46.369931       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0308 03:42:46.369977       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 03:42:46.370000       1 autoregister_controller.go:165] Shutting down autoregister controller
	W0308 03:42:46.370099       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 03:42:46.370156       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0308 03:42:46.370181       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0308 03:42:46.370581       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0308 03:42:46.370653       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	W0308 03:42:46.370863       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 03:42:46.370954       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0308 03:42:46.371205       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.371962       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372068       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372418       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372492       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372553       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372605       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372657       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c] <==
	I0308 03:39:53.911010       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:39:53.913736       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:39:53.926614       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.2.0/24"]
	I0308 03:39:53.950400       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6k8t9"
	I0308 03:39:53.950592       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jtsrw"
	I0308 03:39:54.036356       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959285-m03"
	I0308 03:39:54.036854       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:40:00.712630       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:31.899873       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:34.061893       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-959285-m03 event: Removing Node multinode-959285-m03 from Controller"
	I0308 03:40:34.400347       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:34.400817       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:40:34.423494       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.3.0/24"]
	I0308 03:40:39.062945       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:40:39.692689       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:41:24.096395       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:41:24.096734       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-959285-m03 status is now: NodeNotReady"
	I0308 03:41:24.106452       1 event.go:307] "Event occurred" object="multinode-959285-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-959285-m02 status is now: NodeNotReady"
	I0308 03:41:24.113198       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-6k8t9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.130921       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vsgll" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.141577       1 event.go:307] "Event occurred" object="kube-system/kindnet-jtsrw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.158465       1 event.go:307] "Event occurred" object="kube-system/kindnet-97wl4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.172640       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-mmt2r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.182081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.312754ms"
	I0308 03:41:24.182178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.604µs"
	
	
	==> kube-controller-manager [fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68] <==
	I0308 03:45:11.279188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.771095ms"
	I0308 03:45:11.293058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.715ms"
	I0308 03:45:11.293332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="147.948µs"
	I0308 03:45:17.024116       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m02\" does not exist"
	I0308 03:45:17.025004       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-mmt2r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-mmt2r"
	I0308 03:45:17.041062       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m02" podCIDRs=["10.244.1.0/24"]
	I0308 03:45:17.527477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="79.102µs"
	I0308 03:45:17.548854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.338µs"
	I0308 03:45:17.578772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.995µs"
	I0308 03:45:17.587682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.488µs"
	I0308 03:45:17.592550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="297.121µs"
	I0308 03:45:18.797709       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="41.53µs"
	I0308 03:45:22.650129       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:22.676617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.294µs"
	I0308 03:45:22.697754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.369µs"
	I0308 03:45:22.955940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-rrf76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-rrf76"
	I0308 03:45:24.854936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.719778ms"
	I0308 03:45:24.855163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.195µs"
	I0308 03:45:42.676565       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:42.959461       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-959285-m03 event: Removing Node multinode-959285-m03 from Controller"
	I0308 03:45:45.309465       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:45:45.312505       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:45.323699       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.2.0/24"]
	I0308 03:45:47.960376       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:45:51.400614       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	
	
	==> kube-proxy [711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37] <==
	I0308 03:44:36.969819       1 server_others.go:69] "Using iptables proxy"
	I0308 03:44:36.990817       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0308 03:44:37.046696       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:44:37.046748       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:44:37.056475       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:44:37.056741       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:44:37.059170       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:44:37.059419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:44:37.067630       1 config.go:188] "Starting service config controller"
	I0308 03:44:37.067644       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:44:37.067664       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:44:37.067668       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:44:37.067921       1 config.go:315] "Starting node config controller"
	I0308 03:44:37.067927       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:44:37.168549       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:44:37.168574       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:44:37.168595       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6] <==
	I0308 03:38:30.340335       1 server_others.go:69] "Using iptables proxy"
	I0308 03:38:30.356554       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0308 03:38:30.538113       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:38:30.538159       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:38:30.542183       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:38:30.542315       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:38:30.542645       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:38:30.542678       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:38:30.543935       1 config.go:188] "Starting service config controller"
	I0308 03:38:30.543985       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:38:30.544003       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:38:30.544006       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:38:30.546025       1 config.go:315] "Starting node config controller"
	I0308 03:38:30.546067       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:38:30.646358       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:38:30.646390       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:38:30.646411       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343] <==
	I0308 03:44:33.109533       1 serving.go:348] Generated self-signed cert in-memory
	W0308 03:44:35.360709       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 03:44:35.360766       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:44:35.360777       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 03:44:35.360784       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 03:44:35.434964       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 03:44:35.435016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:44:35.438834       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 03:44:35.438892       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:44:35.443043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 03:44:35.443133       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:44:35.539713       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970] <==
	W0308 03:38:14.828787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:38:14.828857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:38:14.914038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 03:38:14.914160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 03:38:14.930398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:38:14.930517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:38:14.945794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:38:14.946974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:38:14.968045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:14.968141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:14.997049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:38:14.997132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:38:15.079116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.079237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.100994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.101471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.108733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.108873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.197359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:38:15.197529       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0308 03:38:17.429188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:42:46.349917       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 03:42:46.352848       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 03:42:46.353211       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0308 03:42:46.353595       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.883936    3062 topology_manager.go:215] "Topology Admit Handler" podUID="1af93132-b76b-490c-8e4f-f9b2254b6591" podNamespace="kube-system" podName="kindnet-bhngm"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.883992    3062 topology_manager.go:215] "Topology Admit Handler" podUID="ffa19181-f180-401c-a7e2-6e0a79bf07c4" podNamespace="kube-system" podName="storage-provisioner"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.884030    3062 topology_manager.go:215] "Topology Admit Handler" podUID="ec69a733-194a-42ee-b0c1-874ad9669205" podNamespace="default" podName="busybox-5b5d89c9d6-g8bd8"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.893730    3062 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894172    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5e09ab1-b468-4143-a1ed-7b967a5c6e4c-lib-modules\") pod \"kube-proxy-8xrsf\" (UID: \"f5e09ab1-b468-4143-a1ed-7b967a5c6e4c\") " pod="kube-system/kube-proxy-8xrsf"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894228    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af93132-b76b-490c-8e4f-f9b2254b6591-lib-modules\") pod \"kindnet-bhngm\" (UID: \"1af93132-b76b-490c-8e4f-f9b2254b6591\") " pod="kube-system/kindnet-bhngm"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894250    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ffa19181-f180-401c-a7e2-6e0a79bf07c4-tmp\") pod \"storage-provisioner\" (UID: \"ffa19181-f180-401c-a7e2-6e0a79bf07c4\") " pod="kube-system/storage-provisioner"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894324    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5e09ab1-b468-4143-a1ed-7b967a5c6e4c-xtables-lock\") pod \"kube-proxy-8xrsf\" (UID: \"f5e09ab1-b468-4143-a1ed-7b967a5c6e4c\") " pod="kube-system/kube-proxy-8xrsf"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894353    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1af93132-b76b-490c-8e4f-f9b2254b6591-cni-cfg\") pod \"kindnet-bhngm\" (UID: \"1af93132-b76b-490c-8e4f-f9b2254b6591\") " pod="kube-system/kindnet-bhngm"
	Mar 08 03:44:35 multinode-959285 kubelet[3062]: I0308 03:44:35.894395    3062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af93132-b76b-490c-8e4f-f9b2254b6591-xtables-lock\") pod \"kindnet-bhngm\" (UID: \"1af93132-b76b-490c-8e4f-f9b2254b6591\") " pod="kube-system/kindnet-bhngm"
	Mar 08 03:44:41 multinode-959285 kubelet[3062]: I0308 03:44:41.729947    3062 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 08 03:45:30 multinode-959285 kubelet[3062]: E0308 03:45:30.929658    3062 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:45:30 multinode-959285 kubelet[3062]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:45:30 multinode-959285 kubelet[3062]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:45:30 multinode-959285 kubelet[3062]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:45:30 multinode-959285 kubelet[3062]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.008330    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1f5416aad369f6cddede6bd4ab947efa/crio-a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Error finding container a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Status 404 returned error can't find the container with id a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.008757    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1ad688533e699094d997283fbe8a1b36/crio-009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Error finding container 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Status 404 returned error can't find the container with id 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.009075    3062 manager.go:1106] Failed to create existing container: /kubepods/pod1af93132-b76b-490c-8e4f-f9b2254b6591/crio-49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Error finding container 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Status 404 returned error can't find the container with id 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.009600    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podec69a733-194a-42ee-b0c1-874ad9669205/crio-6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Error finding container 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Status 404 returned error can't find the container with id 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.009863    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/poddf2c7c193d0891f806d896d9937dca89/crio-ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Error finding container ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Status 404 returned error can't find the container with id ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.010051    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podffa19181-f180-401c-a7e2-6e0a79bf07c4/crio-6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Error finding container 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Status 404 returned error can't find the container with id 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.010455    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf755d957-2474-40b4-a0cd-2a17b2cee46d/crio-56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Error finding container 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Status 404 returned error can't find the container with id 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.010810    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf5e09ab1-b468-4143-a1ed-7b967a5c6e4c/crio-2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Error finding container 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Status 404 returned error can't find the container with id 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7
	Mar 08 03:45:31 multinode-959285 kubelet[3062]: E0308 03:45:31.011145    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod4232c0eeca9b9eb59847e7cf0198d079/crio-679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Error finding container 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Status 404 returned error can't find the container with id 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:45:54.149668  944954 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18333-911675/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
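Side note (illustrative, not part of the captured output): the "bufio.Scanner: token too long" error above is Go's standard line scanner refusing a line longer than its buffer; by default bufio.Scanner caps a token at bufio.MaxScanTokenSize (64 KiB), so a single oversized line in lastStart.txt makes the read fail. A minimal sketch, assuming a hypothetical local copy of the log file, showing how the error arises and how a larger buffer avoids it:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the log; the real path in the report is truncated here on purpose.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB)
	// stops the scan and sc.Err() returns bufio.ErrTooLong
	// ("bufio.Scanner: token too long").
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process the line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}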
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-959285 -n multinode-959285
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-959285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.62s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 stop
E0308 03:47:52.009187  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959285 stop: exit status 82 (2m0.491978973s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-959285-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-959285 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959285 status: exit status 3 (18.654062028s)

                                                
                                                
-- stdout --
	multinode-959285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-959285-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 03:48:17.841642  945480 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0308 03:48:17.841695  945480 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host

                                                
                                                
** /stderr **
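Side note (illustrative, not from the run): the "no route to host" status errors above are what any TCP dial to the stopped node's SSH endpoint returns once the VM is unreachable; the report shows the probe targeting 192.168.39.18:22. A minimal Go sketch, with that address hard-coded purely for illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.39.18:22 is the m02 SSH endpoint seen in the status errors above.
	// If the VM is down or its network is gone, the dial fails with errors such
	// as "no route to host", "connection refused", or a timeout.
	conn, err := net.DialTimeout("tcp", "192.168.39.18:22", 5*time.Second)
	if err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}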
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-959285 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-959285 -n multinode-959285
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-959285 logs -n 25: (1.563701216s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285:/home/docker/cp-test_multinode-959285-m02_multinode-959285.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285 sudo cat                                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m02_multinode-959285.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03:/home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285-m03 sudo cat                                   | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp testdata/cp-test.txt                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285:/home/docker/cp-test_multinode-959285-m03_multinode-959285.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285 sudo cat                                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02:/home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285-m02 sudo cat                                   | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-959285 node stop m03                                                          | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	| node    | multinode-959285 node start                                                             | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| stop    | -p multinode-959285                                                                     | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| start   | -p multinode-959285                                                                     | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:42 UTC | 08 Mar 24 03:45 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC |                     |
	| node    | multinode-959285 node delete                                                            | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC | 08 Mar 24 03:45 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-959285 stop                                                                   | multinode-959285 | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:42:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:42:45.326243  944177 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:42:45.326525  944177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:42:45.326542  944177 out.go:304] Setting ErrFile to fd 2...
	I0308 03:42:45.326550  944177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:42:45.327129  944177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:42:45.328115  944177 out.go:298] Setting JSON to false
	I0308 03:42:45.329071  944177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26691,"bootTime":1709842674,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:42:45.329133  944177 start.go:139] virtualization: kvm guest
	I0308 03:42:45.331242  944177 out.go:177] * [multinode-959285] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:42:45.332551  944177 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:42:45.333891  944177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:42:45.332536  944177 notify.go:220] Checking for updates...
	I0308 03:42:45.335341  944177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:42:45.336544  944177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:42:45.337669  944177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:42:45.338758  944177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:42:45.340258  944177 config.go:182] Loaded profile config "multinode-959285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:42:45.340368  944177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:42:45.340762  944177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:42:45.340817  944177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:42:45.357046  944177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40023
	I0308 03:42:45.357564  944177 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:42:45.358202  944177 main.go:141] libmachine: Using API Version  1
	I0308 03:42:45.358223  944177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:42:45.358584  944177 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:42:45.358773  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.393367  944177 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:42:45.394513  944177 start.go:297] selected driver: kvm2
	I0308 03:42:45.394523  944177 start.go:901] validating driver "kvm2" against &{Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:42:45.394636  944177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:42:45.394953  944177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:42:45.395010  944177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:42:45.410088  944177 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:42:45.410940  944177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:42:45.411041  944177 cni.go:84] Creating CNI manager for ""
	I0308 03:42:45.411059  944177 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 03:42:45.411114  944177 start.go:340] cluster config:
	{Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:42:45.411255  944177 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:42:45.413712  944177 out.go:177] * Starting "multinode-959285" primary control-plane node in "multinode-959285" cluster
	I0308 03:42:45.414935  944177 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:42:45.414972  944177 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:42:45.414983  944177 cache.go:56] Caching tarball of preloaded images
	I0308 03:42:45.415076  944177 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:42:45.415089  944177 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 03:42:45.415206  944177 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/config.json ...
	I0308 03:42:45.415400  944177 start.go:360] acquireMachinesLock for multinode-959285: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:42:45.415445  944177 start.go:364] duration metric: took 22.911µs to acquireMachinesLock for "multinode-959285"
	I0308 03:42:45.415458  944177 start.go:96] Skipping create...Using existing machine configuration
	I0308 03:42:45.415466  944177 fix.go:54] fixHost starting: 
	I0308 03:42:45.415758  944177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:42:45.415792  944177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:42:45.429881  944177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44923
	I0308 03:42:45.430303  944177 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:42:45.430779  944177 main.go:141] libmachine: Using API Version  1
	I0308 03:42:45.430842  944177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:42:45.431197  944177 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:42:45.431452  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.431656  944177 main.go:141] libmachine: (multinode-959285) Calling .GetState
	I0308 03:42:45.433353  944177 fix.go:112] recreateIfNeeded on multinode-959285: state=Running err=<nil>
	W0308 03:42:45.433393  944177 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 03:42:45.435311  944177 out.go:177] * Updating the running kvm2 "multinode-959285" VM ...
	I0308 03:42:45.436431  944177 machine.go:94] provisionDockerMachine start ...
	I0308 03:42:45.436455  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:42:45.436700  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.439169  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.439624  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.439661  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.439748  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.439922  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.440074  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.440198  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.440380  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.440602  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.440618  944177 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 03:42:45.550914  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959285
	
	I0308 03:42:45.550948  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.551189  944177 buildroot.go:166] provisioning hostname "multinode-959285"
	I0308 03:42:45.551223  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.551394  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.554100  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.554445  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.554489  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.554580  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.554770  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.554926  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.555052  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.555232  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.555402  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.555415  944177 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-959285 && echo "multinode-959285" | sudo tee /etc/hostname
	I0308 03:42:45.685819  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-959285
	
	I0308 03:42:45.685862  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.688887  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.689338  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.689375  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.689609  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:45.689807  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.689997  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:45.690119  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:45.690277  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:45.690500  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:45.690519  944177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-959285' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-959285/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-959285' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:42:45.798639  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
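The script above is minikube's standard guest-hostname fixup: it sets the hostname with "sudo hostname ... | sudo tee /etc/hostname" and then rewrites (or appends) the 127.0.1.1 entry in /etc/hosts. As a minimal sketch only (not part of the test run, and assuming the freshly built out/minikube-linux-amd64 binary for this profile), the same state can be checked by hand over SSH:

	out/minikube-linux-amd64 -p multinode-959285 ssh -- hostname
	out/minikube-linux-amd64 -p multinode-959285 ssh -- grep '^127.0.1.1' /etc/hosts

Both should report multinode-959285, matching the SSH output logged above.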
	I0308 03:42:45.798668  944177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:42:45.798686  944177 buildroot.go:174] setting up certificates
	I0308 03:42:45.798695  944177 provision.go:84] configureAuth start
	I0308 03:42:45.798707  944177 main.go:141] libmachine: (multinode-959285) Calling .GetMachineName
	I0308 03:42:45.798976  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:42:45.801477  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.801805  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.801840  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.802023  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:45.804205  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.804533  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:45.804571  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:45.804757  944177 provision.go:143] copyHostCerts
	I0308 03:42:45.804784  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:42:45.804830  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:42:45.804840  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:42:45.804904  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:42:45.804970  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:42:45.804990  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:42:45.804994  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:42:45.805016  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:42:45.805063  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:42:45.805079  944177 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:42:45.805085  944177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:42:45.805111  944177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:42:45.805159  944177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.multinode-959285 san=[127.0.0.1 192.168.39.174 localhost minikube multinode-959285]
	I0308 03:42:46.005417  944177 provision.go:177] copyRemoteCerts
	I0308 03:42:46.005491  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:42:46.005520  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:46.008149  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.008480  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:46.008501  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.008722  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:46.008929  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.009118  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:46.009254  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:42:46.096891  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0308 03:42:46.096950  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:42:46.129177  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0308 03:42:46.129227  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0308 03:42:46.158908  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0308 03:42:46.158957  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 03:42:46.188198  944177 provision.go:87] duration metric: took 389.488654ms to configureAuth
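configureAuth above regenerates the machine server certificate with the SANs listed earlier (127.0.0.1, 192.168.39.174, localhost, minikube, multinode-959285) and copies it into /etc/docker on the guest. A hedged way to double-check what landed there, assuming openssl is present in the guest image (it may not be on the minimal buildroot ISO, in which case copy the file out and inspect it on the host):

	out/minikube-linux-amd64 -p multinode-959285 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName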
	I0308 03:42:46.188227  944177 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:42:46.188473  944177 config.go:182] Loaded profile config "multinode-959285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:42:46.188591  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:42:46.190947  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.191369  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:42:46.191395  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:42:46.191506  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:42:46.191691  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.191853  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:42:46.192048  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:42:46.192205  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:42:46.192364  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:42:46.192378  944177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:44:17.023961  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:44:17.023994  944177 machine.go:97] duration metric: took 1m31.587546513s to provisionDockerMachine
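Nearly all of that 1m31.58s sits in the container-runtime options step: the SSH command issued at 03:42:46 only returns at 03:44:17 because it ends with "sudo systemctl restart crio". The file it writes is a one-liner; per the printf above, /etc/sysconfig/crio.minikube ends up containing exactly:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

so the long wait is the crio restart itself, not the config write.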
	I0308 03:44:17.024011  944177 start.go:293] postStartSetup for "multinode-959285" (driver="kvm2")
	I0308 03:44:17.024028  944177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:44:17.024062  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.024467  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:44:17.024505  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.027909  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.028374  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.028414  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.028608  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.028796  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.028966  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.029119  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.113576  944177 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:44:17.117878  944177 command_runner.go:130] > NAME=Buildroot
	I0308 03:44:17.117891  944177 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 03:44:17.117896  944177 command_runner.go:130] > ID=buildroot
	I0308 03:44:17.117900  944177 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 03:44:17.117905  944177 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 03:44:17.118031  944177 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:44:17.118068  944177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:44:17.118138  944177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:44:17.118210  944177 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:44:17.118221  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /etc/ssl/certs/9189882.pem
	I0308 03:44:17.118305  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:44:17.128647  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:44:17.153824  944177 start.go:296] duration metric: took 129.801621ms for postStartSetup
	I0308 03:44:17.153878  944177 fix.go:56] duration metric: took 1m31.738411758s for fixHost
	I0308 03:44:17.153900  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.156530  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.156913  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.156934  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.157101  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.157270  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.157471  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.157610  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.157803  944177 main.go:141] libmachine: Using SSH client type: native
	I0308 03:44:17.157992  944177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0308 03:44:17.158005  944177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:44:17.262164  944177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709869457.245029886
	
	I0308 03:44:17.262193  944177 fix.go:216] guest clock: 1709869457.245029886
	I0308 03:44:17.262202  944177 fix.go:229] Guest: 2024-03-08 03:44:17.245029886 +0000 UTC Remote: 2024-03-08 03:44:17.153885528 +0000 UTC m=+91.878096196 (delta=91.144358ms)
	I0308 03:44:17.262230  944177 fix.go:200] guest clock delta is within tolerance: 91.144358ms
	I0308 03:44:17.262237  944177 start.go:83] releasing machines lock for "multinode-959285", held for 1m31.846782767s
	I0308 03:44:17.262267  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.262537  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:44:17.265137  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.265588  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.265627  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.265698  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266311  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266535  944177 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:44:17.266633  944177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:44:17.266675  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.266784  944177 ssh_runner.go:195] Run: cat /version.json
	I0308 03:44:17.266823  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:44:17.269408  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.269804  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.269844  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.269870  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.270018  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.270191  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.270304  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:17.270329  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:17.270345  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.270492  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:44:17.270517  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.270634  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:44:17.270764  944177 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:44:17.270896  944177 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:44:17.349904  944177 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0308 03:44:17.376941  944177 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 03:44:17.377874  944177 ssh_runner.go:195] Run: systemctl --version
	I0308 03:44:17.383700  944177 command_runner.go:130] > systemd 252 (252)
	I0308 03:44:17.383729  944177 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0308 03:44:17.384041  944177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:44:17.544085  944177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 03:44:17.552561  944177 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0308 03:44:17.552718  944177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:44:17.552790  944177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:44:17.562746  944177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 03:44:17.562764  944177 start.go:494] detecting cgroup driver to use...
	I0308 03:44:17.562833  944177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:44:17.579303  944177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:44:17.593537  944177 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:44:17.593582  944177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:44:17.607527  944177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:44:17.622146  944177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:44:17.769218  944177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:44:17.918989  944177 docker.go:233] disabling docker service ...
	I0308 03:44:17.919054  944177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:44:17.934940  944177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:44:17.948677  944177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:44:18.097496  944177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:44:18.252890  944177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:44:18.270021  944177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:44:18.290297  944177 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0308 03:44:18.290724  944177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 03:44:18.290799  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.302553  944177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:44:18.302616  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.315049  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.326587  944177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:44:18.338201  944177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:44:18.349941  944177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:44:18.360110  944177 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 03:44:18.360384  944177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:44:18.370437  944177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:44:18.515250  944177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:44:28.258888  944177 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.743596376s)
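The sed edits above touch only a handful of directives in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. As a sketch of their net effect (just the lines those edits change, not the full drop-in file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

together with net.bridge.bridge-nf-call-iptables = 1 and /proc/sys/net/ipv4/ip_forward = 1 on the kernel side; the 9.7s "systemctl restart crio" that follows picks the settings up.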
	I0308 03:44:28.258927  944177 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:44:28.258992  944177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:44:28.264465  944177 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0308 03:44:28.264517  944177 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 03:44:28.264528  944177 command_runner.go:130] > Device: 0,22	Inode: 1334        Links: 1
	I0308 03:44:28.264539  944177 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 03:44:28.264545  944177 command_runner.go:130] > Access: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264553  944177 command_runner.go:130] > Modify: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264561  944177 command_runner.go:130] > Change: 2024-03-08 03:44:28.138030127 +0000
	I0308 03:44:28.264567  944177 command_runner.go:130] >  Birth: -
	I0308 03:44:28.264679  944177 start.go:562] Will wait 60s for crictl version
	I0308 03:44:28.264736  944177 ssh_runner.go:195] Run: which crictl
	I0308 03:44:28.268886  944177 command_runner.go:130] > /usr/bin/crictl
	I0308 03:44:28.269015  944177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:44:28.312939  944177 command_runner.go:130] > Version:  0.1.0
	I0308 03:44:28.312957  944177 command_runner.go:130] > RuntimeName:  cri-o
	I0308 03:44:28.312961  944177 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0308 03:44:28.312966  944177 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 03:44:28.313068  944177 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
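The version probe above goes through the endpoint written to /etc/crictl.yaml a moment earlier. Reproducing it by hand inside the guest is just the following (the explicit --runtime-endpoint flag is optional once crictl.yaml is in place):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version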
	I0308 03:44:28.313139  944177 ssh_runner.go:195] Run: crio --version
	I0308 03:44:28.343783  944177 command_runner.go:130] > crio version 1.29.1
	I0308 03:44:28.343799  944177 command_runner.go:130] > Version:        1.29.1
	I0308 03:44:28.343807  944177 command_runner.go:130] > GitCommit:      unknown
	I0308 03:44:28.343813  944177 command_runner.go:130] > GitCommitDate:  unknown
	I0308 03:44:28.343820  944177 command_runner.go:130] > GitTreeState:   clean
	I0308 03:44:28.343838  944177 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0308 03:44:28.343846  944177 command_runner.go:130] > GoVersion:      go1.21.6
	I0308 03:44:28.343853  944177 command_runner.go:130] > Compiler:       gc
	I0308 03:44:28.343862  944177 command_runner.go:130] > Platform:       linux/amd64
	I0308 03:44:28.343872  944177 command_runner.go:130] > Linkmode:       dynamic
	I0308 03:44:28.343880  944177 command_runner.go:130] > BuildTags:      
	I0308 03:44:28.343887  944177 command_runner.go:130] >   containers_image_ostree_stub
	I0308 03:44:28.343896  944177 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0308 03:44:28.343906  944177 command_runner.go:130] >   btrfs_noversion
	I0308 03:44:28.343915  944177 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0308 03:44:28.343926  944177 command_runner.go:130] >   libdm_no_deferred_remove
	I0308 03:44:28.343932  944177 command_runner.go:130] >   seccomp
	I0308 03:44:28.343940  944177 command_runner.go:130] > LDFlags:          unknown
	I0308 03:44:28.343947  944177 command_runner.go:130] > SeccompEnabled:   true
	I0308 03:44:28.343956  944177 command_runner.go:130] > AppArmorEnabled:  false
	I0308 03:44:28.344986  944177 ssh_runner.go:195] Run: crio --version
	I0308 03:44:28.375126  944177 command_runner.go:130] > crio version 1.29.1
	I0308 03:44:28.375147  944177 command_runner.go:130] > Version:        1.29.1
	I0308 03:44:28.375158  944177 command_runner.go:130] > GitCommit:      unknown
	I0308 03:44:28.375164  944177 command_runner.go:130] > GitCommitDate:  unknown
	I0308 03:44:28.375170  944177 command_runner.go:130] > GitTreeState:   clean
	I0308 03:44:28.375177  944177 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0308 03:44:28.375183  944177 command_runner.go:130] > GoVersion:      go1.21.6
	I0308 03:44:28.375188  944177 command_runner.go:130] > Compiler:       gc
	I0308 03:44:28.375195  944177 command_runner.go:130] > Platform:       linux/amd64
	I0308 03:44:28.375202  944177 command_runner.go:130] > Linkmode:       dynamic
	I0308 03:44:28.375214  944177 command_runner.go:130] > BuildTags:      
	I0308 03:44:28.375221  944177 command_runner.go:130] >   containers_image_ostree_stub
	I0308 03:44:28.375236  944177 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0308 03:44:28.375243  944177 command_runner.go:130] >   btrfs_noversion
	I0308 03:44:28.375258  944177 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0308 03:44:28.375265  944177 command_runner.go:130] >   libdm_no_deferred_remove
	I0308 03:44:28.375271  944177 command_runner.go:130] >   seccomp
	I0308 03:44:28.375278  944177 command_runner.go:130] > LDFlags:          unknown
	I0308 03:44:28.375288  944177 command_runner.go:130] > SeccompEnabled:   true
	I0308 03:44:28.375296  944177 command_runner.go:130] > AppArmorEnabled:  false
	I0308 03:44:28.377045  944177 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 03:44:28.378408  944177 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:44:28.380963  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:28.381339  944177 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:44:28.381368  944177 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:44:28.381625  944177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:44:28.386091  944177 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0308 03:44:28.386183  944177 kubeadm.go:877] updating cluster {Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:44:28.386312  944177 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 03:44:28.386354  944177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:44:28.438045  944177 command_runner.go:130] > {
	I0308 03:44:28.438075  944177 command_runner.go:130] >   "images": [
	I0308 03:44:28.438080  944177 command_runner.go:130] >     {
	I0308 03:44:28.438092  944177 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0308 03:44:28.438098  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438107  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0308 03:44:28.438112  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438118  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438130  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0308 03:44:28.438148  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0308 03:44:28.438154  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438164  944177 command_runner.go:130] >       "size": "65258016",
	I0308 03:44:28.438171  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438178  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438190  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438200  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438208  944177 command_runner.go:130] >     },
	I0308 03:44:28.438213  944177 command_runner.go:130] >     {
	I0308 03:44:28.438224  944177 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0308 03:44:28.438233  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438244  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0308 03:44:28.438253  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438263  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438274  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0308 03:44:28.438288  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0308 03:44:28.438297  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438304  944177 command_runner.go:130] >       "size": "65291810",
	I0308 03:44:28.438312  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438329  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438347  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438353  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438361  944177 command_runner.go:130] >     },
	I0308 03:44:28.438367  944177 command_runner.go:130] >     {
	I0308 03:44:28.438379  944177 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0308 03:44:28.438389  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438400  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0308 03:44:28.438416  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438425  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438439  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0308 03:44:28.438454  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0308 03:44:28.438462  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438468  944177 command_runner.go:130] >       "size": "1363676",
	I0308 03:44:28.438478  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438487  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438496  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438506  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438514  944177 command_runner.go:130] >     },
	I0308 03:44:28.438522  944177 command_runner.go:130] >     {
	I0308 03:44:28.438534  944177 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0308 03:44:28.438543  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438554  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0308 03:44:28.438563  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438570  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438585  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0308 03:44:28.438607  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0308 03:44:28.438617  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438625  944177 command_runner.go:130] >       "size": "31470524",
	I0308 03:44:28.438633  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438642  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438651  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438660  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438665  944177 command_runner.go:130] >     },
	I0308 03:44:28.438673  944177 command_runner.go:130] >     {
	I0308 03:44:28.438684  944177 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0308 03:44:28.438693  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438704  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0308 03:44:28.438712  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438721  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438735  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0308 03:44:28.438750  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0308 03:44:28.438759  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438768  944177 command_runner.go:130] >       "size": "53621675",
	I0308 03:44:28.438784  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.438794  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438803  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438818  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438827  944177 command_runner.go:130] >     },
	I0308 03:44:28.438835  944177 command_runner.go:130] >     {
	I0308 03:44:28.438844  944177 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0308 03:44:28.438854  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.438871  944177 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0308 03:44:28.438879  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438888  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.438901  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0308 03:44:28.438915  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0308 03:44:28.438924  944177 command_runner.go:130] >       ],
	I0308 03:44:28.438930  944177 command_runner.go:130] >       "size": "295456551",
	I0308 03:44:28.438938  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.438947  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.438956  944177 command_runner.go:130] >       },
	I0308 03:44:28.438964  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.438970  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.438978  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.438982  944177 command_runner.go:130] >     },
	I0308 03:44:28.438990  944177 command_runner.go:130] >     {
	I0308 03:44:28.438998  944177 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0308 03:44:28.439007  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439016  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0308 03:44:28.439024  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439030  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439042  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0308 03:44:28.439057  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0308 03:44:28.439064  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439070  944177 command_runner.go:130] >       "size": "127226832",
	I0308 03:44:28.439078  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439086  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439094  944177 command_runner.go:130] >       },
	I0308 03:44:28.439101  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439125  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439135  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439144  944177 command_runner.go:130] >     },
	I0308 03:44:28.439152  944177 command_runner.go:130] >     {
	I0308 03:44:28.439165  944177 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0308 03:44:28.439173  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439182  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0308 03:44:28.439191  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439199  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439231  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0308 03:44:28.439247  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0308 03:44:28.439256  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439265  944177 command_runner.go:130] >       "size": "123261750",
	I0308 03:44:28.439273  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439278  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439287  944177 command_runner.go:130] >       },
	I0308 03:44:28.439297  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439306  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439316  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439325  944177 command_runner.go:130] >     },
	I0308 03:44:28.439332  944177 command_runner.go:130] >     {
	I0308 03:44:28.439344  944177 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0308 03:44:28.439351  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439356  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0308 03:44:28.439360  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439363  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439373  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0308 03:44:28.439380  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0308 03:44:28.439384  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439387  944177 command_runner.go:130] >       "size": "74749335",
	I0308 03:44:28.439391  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.439395  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439400  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439404  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439407  944177 command_runner.go:130] >     },
	I0308 03:44:28.439411  944177 command_runner.go:130] >     {
	I0308 03:44:28.439423  944177 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0308 03:44:28.439430  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439435  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0308 03:44:28.439441  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439445  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439454  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0308 03:44:28.439463  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0308 03:44:28.439468  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439472  944177 command_runner.go:130] >       "size": "61551410",
	I0308 03:44:28.439478  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439482  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.439488  944177 command_runner.go:130] >       },
	I0308 03:44:28.439492  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439498  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439502  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.439508  944177 command_runner.go:130] >     },
	I0308 03:44:28.439512  944177 command_runner.go:130] >     {
	I0308 03:44:28.439520  944177 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0308 03:44:28.439526  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.439531  944177 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0308 03:44:28.439537  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439541  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.439550  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0308 03:44:28.439559  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0308 03:44:28.439564  944177 command_runner.go:130] >       ],
	I0308 03:44:28.439568  944177 command_runner.go:130] >       "size": "750414",
	I0308 03:44:28.439576  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.439585  944177 command_runner.go:130] >         "value": "65535"
	I0308 03:44:28.439594  944177 command_runner.go:130] >       },
	I0308 03:44:28.439603  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.439612  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.439620  944177 command_runner.go:130] >       "pinned": true
	I0308 03:44:28.439628  944177 command_runner.go:130] >     }
	I0308 03:44:28.439636  944177 command_runner.go:130] >   ]
	I0308 03:44:28.439644  944177 command_runner.go:130] > }
	I0308 03:44:28.439858  944177 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:44:28.439872  944177 crio.go:415] Images already preloaded, skipping extraction
	I0308 03:44:28.439924  944177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:44:28.476415  944177 command_runner.go:130] > {
	I0308 03:44:28.476443  944177 command_runner.go:130] >   "images": [
	I0308 03:44:28.476449  944177 command_runner.go:130] >     {
	I0308 03:44:28.476462  944177 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0308 03:44:28.476470  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476478  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0308 03:44:28.476483  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476488  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476516  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0308 03:44:28.476527  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0308 03:44:28.476531  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476535  944177 command_runner.go:130] >       "size": "65258016",
	I0308 03:44:28.476540  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476544  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476552  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476559  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476566  944177 command_runner.go:130] >     },
	I0308 03:44:28.476569  944177 command_runner.go:130] >     {
	I0308 03:44:28.476578  944177 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0308 03:44:28.476588  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476596  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0308 03:44:28.476605  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476612  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476627  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0308 03:44:28.476641  944177 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0308 03:44:28.476650  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476659  944177 command_runner.go:130] >       "size": "65291810",
	I0308 03:44:28.476668  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476684  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476693  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476703  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476711  944177 command_runner.go:130] >     },
	I0308 03:44:28.476716  944177 command_runner.go:130] >     {
	I0308 03:44:28.476729  944177 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0308 03:44:28.476738  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476748  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0308 03:44:28.476755  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476759  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476768  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0308 03:44:28.476777  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0308 03:44:28.476783  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476788  944177 command_runner.go:130] >       "size": "1363676",
	I0308 03:44:28.476794  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476798  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476820  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476828  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476831  944177 command_runner.go:130] >     },
	I0308 03:44:28.476834  944177 command_runner.go:130] >     {
	I0308 03:44:28.476840  944177 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0308 03:44:28.476846  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476852  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0308 03:44:28.476858  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476862  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.476872  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0308 03:44:28.476891  944177 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0308 03:44:28.476899  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476909  944177 command_runner.go:130] >       "size": "31470524",
	I0308 03:44:28.476918  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.476928  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.476937  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.476946  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.476953  944177 command_runner.go:130] >     },
	I0308 03:44:28.476959  944177 command_runner.go:130] >     {
	I0308 03:44:28.476972  944177 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0308 03:44:28.476981  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.476987  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0308 03:44:28.476993  944177 command_runner.go:130] >       ],
	I0308 03:44:28.476998  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477007  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0308 03:44:28.477017  944177 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0308 03:44:28.477030  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477038  944177 command_runner.go:130] >       "size": "53621675",
	I0308 03:44:28.477041  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.477048  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477052  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477059  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477062  944177 command_runner.go:130] >     },
	I0308 03:44:28.477066  944177 command_runner.go:130] >     {
	I0308 03:44:28.477072  944177 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0308 03:44:28.477079  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477088  944177 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0308 03:44:28.477094  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477098  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477107  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0308 03:44:28.477117  944177 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0308 03:44:28.477122  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477127  944177 command_runner.go:130] >       "size": "295456551",
	I0308 03:44:28.477133  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477137  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477146  944177 command_runner.go:130] >       },
	I0308 03:44:28.477152  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477156  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477162  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477166  944177 command_runner.go:130] >     },
	I0308 03:44:28.477168  944177 command_runner.go:130] >     {
	I0308 03:44:28.477174  944177 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0308 03:44:28.477181  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477186  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0308 03:44:28.477191  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477196  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477205  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0308 03:44:28.477214  944177 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0308 03:44:28.477220  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477225  944177 command_runner.go:130] >       "size": "127226832",
	I0308 03:44:28.477230  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477234  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477240  944177 command_runner.go:130] >       },
	I0308 03:44:28.477244  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477248  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477253  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477256  944177 command_runner.go:130] >     },
	I0308 03:44:28.477262  944177 command_runner.go:130] >     {
	I0308 03:44:28.477269  944177 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0308 03:44:28.477288  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477300  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0308 03:44:28.477308  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477318  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477342  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0308 03:44:28.477352  944177 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0308 03:44:28.477357  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477361  944177 command_runner.go:130] >       "size": "123261750",
	I0308 03:44:28.477367  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477371  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477377  944177 command_runner.go:130] >       },
	I0308 03:44:28.477381  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477387  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477391  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477397  944177 command_runner.go:130] >     },
	I0308 03:44:28.477400  944177 command_runner.go:130] >     {
	I0308 03:44:28.477407  944177 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0308 03:44:28.477413  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477418  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0308 03:44:28.477424  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477428  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477437  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0308 03:44:28.477446  944177 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0308 03:44:28.477454  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477461  944177 command_runner.go:130] >       "size": "74749335",
	I0308 03:44:28.477465  944177 command_runner.go:130] >       "uid": null,
	I0308 03:44:28.477471  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477475  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477482  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477487  944177 command_runner.go:130] >     },
	I0308 03:44:28.477495  944177 command_runner.go:130] >     {
	I0308 03:44:28.477505  944177 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0308 03:44:28.477514  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477524  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0308 03:44:28.477532  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477538  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477550  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0308 03:44:28.477562  944177 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0308 03:44:28.477569  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477583  944177 command_runner.go:130] >       "size": "61551410",
	I0308 03:44:28.477593  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477603  944177 command_runner.go:130] >         "value": "0"
	I0308 03:44:28.477610  944177 command_runner.go:130] >       },
	I0308 03:44:28.477617  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477625  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477633  944177 command_runner.go:130] >       "pinned": false
	I0308 03:44:28.477641  944177 command_runner.go:130] >     },
	I0308 03:44:28.477649  944177 command_runner.go:130] >     {
	I0308 03:44:28.477658  944177 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0308 03:44:28.477667  944177 command_runner.go:130] >       "repoTags": [
	I0308 03:44:28.477674  944177 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0308 03:44:28.477682  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477692  944177 command_runner.go:130] >       "repoDigests": [
	I0308 03:44:28.477705  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0308 03:44:28.477718  944177 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0308 03:44:28.477726  944177 command_runner.go:130] >       ],
	I0308 03:44:28.477734  944177 command_runner.go:130] >       "size": "750414",
	I0308 03:44:28.477743  944177 command_runner.go:130] >       "uid": {
	I0308 03:44:28.477752  944177 command_runner.go:130] >         "value": "65535"
	I0308 03:44:28.477760  944177 command_runner.go:130] >       },
	I0308 03:44:28.477765  944177 command_runner.go:130] >       "username": "",
	I0308 03:44:28.477773  944177 command_runner.go:130] >       "spec": null,
	I0308 03:44:28.477782  944177 command_runner.go:130] >       "pinned": true
	I0308 03:44:28.477790  944177 command_runner.go:130] >     }
	I0308 03:44:28.477795  944177 command_runner.go:130] >   ]
	I0308 03:44:28.477803  944177 command_runner.go:130] > }
	I0308 03:44:28.478064  944177 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 03:44:28.478085  944177 cache_images.go:84] Images are preloaded, skipping loading
	I0308 03:44:28.478093  944177 kubeadm.go:928] updating node { 192.168.39.174 8443 v1.28.4 crio true true} ...
	I0308 03:44:28.478238  944177 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-959285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:44:28.478307  944177 ssh_runner.go:195] Run: crio config
	I0308 03:44:28.514581  944177 command_runner.go:130] ! time="2024-03-08 03:44:28.497000109Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0308 03:44:28.519869  944177 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0308 03:44:28.532868  944177 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0308 03:44:28.532888  944177 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0308 03:44:28.532894  944177 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0308 03:44:28.532909  944177 command_runner.go:130] > #
	I0308 03:44:28.532916  944177 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0308 03:44:28.532927  944177 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0308 03:44:28.532938  944177 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0308 03:44:28.532947  944177 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0308 03:44:28.532953  944177 command_runner.go:130] > # reload'.
	I0308 03:44:28.532959  944177 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0308 03:44:28.532967  944177 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0308 03:44:28.532973  944177 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0308 03:44:28.532980  944177 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0308 03:44:28.532984  944177 command_runner.go:130] > [crio]
	I0308 03:44:28.532991  944177 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0308 03:44:28.532997  944177 command_runner.go:130] > # containers images, in this directory.
	I0308 03:44:28.533004  944177 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0308 03:44:28.533013  944177 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0308 03:44:28.533020  944177 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0308 03:44:28.533028  944177 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0308 03:44:28.533034  944177 command_runner.go:130] > # imagestore = ""
	I0308 03:44:28.533041  944177 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0308 03:44:28.533049  944177 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0308 03:44:28.533053  944177 command_runner.go:130] > storage_driver = "overlay"
	I0308 03:44:28.533061  944177 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0308 03:44:28.533067  944177 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0308 03:44:28.533075  944177 command_runner.go:130] > storage_option = [
	I0308 03:44:28.533080  944177 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0308 03:44:28.533083  944177 command_runner.go:130] > ]
	I0308 03:44:28.533089  944177 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0308 03:44:28.533095  944177 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0308 03:44:28.533099  944177 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0308 03:44:28.533107  944177 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0308 03:44:28.533114  944177 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0308 03:44:28.533120  944177 command_runner.go:130] > # always happen on a node reboot
	I0308 03:44:28.533125  944177 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0308 03:44:28.533138  944177 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0308 03:44:28.533147  944177 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0308 03:44:28.533152  944177 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0308 03:44:28.533177  944177 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0308 03:44:28.533189  944177 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0308 03:44:28.533196  944177 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0308 03:44:28.533200  944177 command_runner.go:130] > # internal_wipe = true
	I0308 03:44:28.533208  944177 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0308 03:44:28.533216  944177 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0308 03:44:28.533220  944177 command_runner.go:130] > # internal_repair = false
	I0308 03:44:28.533227  944177 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0308 03:44:28.533233  944177 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0308 03:44:28.533240  944177 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0308 03:44:28.533245  944177 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0308 03:44:28.533252  944177 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0308 03:44:28.533255  944177 command_runner.go:130] > [crio.api]
	I0308 03:44:28.533260  944177 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0308 03:44:28.533267  944177 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0308 03:44:28.533279  944177 command_runner.go:130] > # IP address on which the stream server will listen.
	I0308 03:44:28.533284  944177 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0308 03:44:28.533290  944177 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0308 03:44:28.533295  944177 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0308 03:44:28.533301  944177 command_runner.go:130] > # stream_port = "0"
	I0308 03:44:28.533306  944177 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0308 03:44:28.533310  944177 command_runner.go:130] > # stream_enable_tls = false
	I0308 03:44:28.533318  944177 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0308 03:44:28.533322  944177 command_runner.go:130] > # stream_idle_timeout = ""
	I0308 03:44:28.533328  944177 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0308 03:44:28.533338  944177 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0308 03:44:28.533343  944177 command_runner.go:130] > # minutes.
	I0308 03:44:28.533346  944177 command_runner.go:130] > # stream_tls_cert = ""
	I0308 03:44:28.533354  944177 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0308 03:44:28.533360  944177 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0308 03:44:28.533367  944177 command_runner.go:130] > # stream_tls_key = ""
	I0308 03:44:28.533373  944177 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0308 03:44:28.533381  944177 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0308 03:44:28.533402  944177 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0308 03:44:28.533408  944177 command_runner.go:130] > # stream_tls_ca = ""
	I0308 03:44:28.533416  944177 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0308 03:44:28.533427  944177 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0308 03:44:28.533436  944177 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0308 03:44:28.533440  944177 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0308 03:44:28.533446  944177 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0308 03:44:28.533454  944177 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0308 03:44:28.533458  944177 command_runner.go:130] > [crio.runtime]
	I0308 03:44:28.533463  944177 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0308 03:44:28.533470  944177 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0308 03:44:28.533474  944177 command_runner.go:130] > # "nofile=1024:2048"
	I0308 03:44:28.533480  944177 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0308 03:44:28.533485  944177 command_runner.go:130] > # default_ulimits = [
	I0308 03:44:28.533488  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533496  944177 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0308 03:44:28.533501  944177 command_runner.go:130] > # no_pivot = false
	I0308 03:44:28.533507  944177 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0308 03:44:28.533515  944177 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0308 03:44:28.533520  944177 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0308 03:44:28.533527  944177 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0308 03:44:28.533532  944177 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0308 03:44:28.533538  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0308 03:44:28.533545  944177 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0308 03:44:28.533549  944177 command_runner.go:130] > # Cgroup setting for conmon
	I0308 03:44:28.533558  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0308 03:44:28.533562  944177 command_runner.go:130] > conmon_cgroup = "pod"
	I0308 03:44:28.533567  944177 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0308 03:44:28.533573  944177 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0308 03:44:28.533582  944177 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0308 03:44:28.533588  944177 command_runner.go:130] > conmon_env = [
	I0308 03:44:28.533593  944177 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0308 03:44:28.533597  944177 command_runner.go:130] > ]
	I0308 03:44:28.533602  944177 command_runner.go:130] > # Additional environment variables to set for all the
	I0308 03:44:28.533609  944177 command_runner.go:130] > # containers. These are overridden if set in the
	I0308 03:44:28.533614  944177 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0308 03:44:28.533620  944177 command_runner.go:130] > # default_env = [
	I0308 03:44:28.533629  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533637  944177 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0308 03:44:28.533649  944177 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0308 03:44:28.533656  944177 command_runner.go:130] > # selinux = false
	I0308 03:44:28.533662  944177 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0308 03:44:28.533670  944177 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0308 03:44:28.533675  944177 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0308 03:44:28.533682  944177 command_runner.go:130] > # seccomp_profile = ""
	I0308 03:44:28.533687  944177 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0308 03:44:28.533694  944177 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0308 03:44:28.533700  944177 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0308 03:44:28.533707  944177 command_runner.go:130] > # which might increase security.
	I0308 03:44:28.533711  944177 command_runner.go:130] > # This option is currently deprecated,
	I0308 03:44:28.533719  944177 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0308 03:44:28.533724  944177 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0308 03:44:28.533729  944177 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0308 03:44:28.533735  944177 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0308 03:44:28.533744  944177 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0308 03:44:28.533749  944177 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0308 03:44:28.533757  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.533761  944177 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0308 03:44:28.533767  944177 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0308 03:44:28.533771  944177 command_runner.go:130] > # the cgroup blockio controller.
	I0308 03:44:28.533776  944177 command_runner.go:130] > # blockio_config_file = ""
	I0308 03:44:28.533784  944177 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0308 03:44:28.533789  944177 command_runner.go:130] > # blockio parameters.
	I0308 03:44:28.533793  944177 command_runner.go:130] > # blockio_reload = false
	I0308 03:44:28.533802  944177 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0308 03:44:28.533806  944177 command_runner.go:130] > # irqbalance daemon.
	I0308 03:44:28.533813  944177 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0308 03:44:28.533821  944177 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0308 03:44:28.533830  944177 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0308 03:44:28.533836  944177 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0308 03:44:28.533844  944177 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0308 03:44:28.533850  944177 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0308 03:44:28.533858  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.533862  944177 command_runner.go:130] > # rdt_config_file = ""
	I0308 03:44:28.533869  944177 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0308 03:44:28.533879  944177 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0308 03:44:28.533919  944177 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0308 03:44:28.533927  944177 command_runner.go:130] > # separate_pull_cgroup = ""
	I0308 03:44:28.533933  944177 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0308 03:44:28.533938  944177 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0308 03:44:28.533942  944177 command_runner.go:130] > # will be added.
	I0308 03:44:28.533946  944177 command_runner.go:130] > # default_capabilities = [
	I0308 03:44:28.533949  944177 command_runner.go:130] > # 	"CHOWN",
	I0308 03:44:28.533953  944177 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0308 03:44:28.533956  944177 command_runner.go:130] > # 	"FSETID",
	I0308 03:44:28.533959  944177 command_runner.go:130] > # 	"FOWNER",
	I0308 03:44:28.533963  944177 command_runner.go:130] > # 	"SETGID",
	I0308 03:44:28.533968  944177 command_runner.go:130] > # 	"SETUID",
	I0308 03:44:28.533972  944177 command_runner.go:130] > # 	"SETPCAP",
	I0308 03:44:28.533976  944177 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0308 03:44:28.533979  944177 command_runner.go:130] > # 	"KILL",
	I0308 03:44:28.533984  944177 command_runner.go:130] > # ]
	I0308 03:44:28.533991  944177 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0308 03:44:28.534000  944177 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0308 03:44:28.534004  944177 command_runner.go:130] > # add_inheritable_capabilities = false
	I0308 03:44:28.534010  944177 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0308 03:44:28.534018  944177 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0308 03:44:28.534022  944177 command_runner.go:130] > # default_sysctls = [
	I0308 03:44:28.534028  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534032  944177 command_runner.go:130] > # List of devices on the host that a
	I0308 03:44:28.534038  944177 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0308 03:44:28.534044  944177 command_runner.go:130] > # allowed_devices = [
	I0308 03:44:28.534048  944177 command_runner.go:130] > # 	"/dev/fuse",
	I0308 03:44:28.534051  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534056  944177 command_runner.go:130] > # List of additional devices. specified as
	I0308 03:44:28.534063  944177 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0308 03:44:28.534070  944177 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0308 03:44:28.534076  944177 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0308 03:44:28.534084  944177 command_runner.go:130] > # additional_devices = [
	I0308 03:44:28.534088  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534094  944177 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0308 03:44:28.534103  944177 command_runner.go:130] > # cdi_spec_dirs = [
	I0308 03:44:28.534109  944177 command_runner.go:130] > # 	"/etc/cdi",
	I0308 03:44:28.534113  944177 command_runner.go:130] > # 	"/var/run/cdi",
	I0308 03:44:28.534118  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534128  944177 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0308 03:44:28.534137  944177 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0308 03:44:28.534140  944177 command_runner.go:130] > # Defaults to false.
	I0308 03:44:28.534145  944177 command_runner.go:130] > # device_ownership_from_security_context = false
	I0308 03:44:28.534152  944177 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0308 03:44:28.534159  944177 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0308 03:44:28.534169  944177 command_runner.go:130] > # hooks_dir = [
	I0308 03:44:28.534174  944177 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0308 03:44:28.534177  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534183  944177 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0308 03:44:28.534191  944177 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0308 03:44:28.534197  944177 command_runner.go:130] > # its default mounts from the following two files:
	I0308 03:44:28.534202  944177 command_runner.go:130] > #
	I0308 03:44:28.534208  944177 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0308 03:44:28.534216  944177 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0308 03:44:28.534221  944177 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0308 03:44:28.534224  944177 command_runner.go:130] > #
	I0308 03:44:28.534230  944177 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0308 03:44:28.534238  944177 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0308 03:44:28.534244  944177 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0308 03:44:28.534251  944177 command_runner.go:130] > #      only add mounts it finds in this file.
	I0308 03:44:28.534255  944177 command_runner.go:130] > #
	I0308 03:44:28.534261  944177 command_runner.go:130] > # default_mounts_file = ""
	I0308 03:44:28.534267  944177 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0308 03:44:28.534274  944177 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0308 03:44:28.534277  944177 command_runner.go:130] > pids_limit = 1024
	I0308 03:44:28.534283  944177 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0308 03:44:28.534294  944177 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0308 03:44:28.534307  944177 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0308 03:44:28.534314  944177 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0308 03:44:28.534321  944177 command_runner.go:130] > # log_size_max = -1
	I0308 03:44:28.534328  944177 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0308 03:44:28.534343  944177 command_runner.go:130] > # log_to_journald = false
	I0308 03:44:28.534351  944177 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0308 03:44:28.534356  944177 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0308 03:44:28.534364  944177 command_runner.go:130] > # Path to directory for container attach sockets.
	I0308 03:44:28.534369  944177 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0308 03:44:28.534377  944177 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0308 03:44:28.534380  944177 command_runner.go:130] > # bind_mount_prefix = ""
	I0308 03:44:28.534388  944177 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0308 03:44:28.534392  944177 command_runner.go:130] > # read_only = false
	I0308 03:44:28.534398  944177 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0308 03:44:28.534406  944177 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0308 03:44:28.534410  944177 command_runner.go:130] > # live configuration reload.
	I0308 03:44:28.534416  944177 command_runner.go:130] > # log_level = "info"
	I0308 03:44:28.534421  944177 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0308 03:44:28.534428  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.534432  944177 command_runner.go:130] > # log_filter = ""
	I0308 03:44:28.534440  944177 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0308 03:44:28.534448  944177 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0308 03:44:28.534454  944177 command_runner.go:130] > # separated by comma.
	I0308 03:44:28.534461  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534467  944177 command_runner.go:130] > # uid_mappings = ""
	I0308 03:44:28.534473  944177 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0308 03:44:28.534479  944177 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0308 03:44:28.534483  944177 command_runner.go:130] > # separated by comma.
	I0308 03:44:28.534490  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534496  944177 command_runner.go:130] > # gid_mappings = ""
	I0308 03:44:28.534502  944177 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0308 03:44:28.534510  944177 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0308 03:44:28.534515  944177 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0308 03:44:28.534525  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534529  944177 command_runner.go:130] > # minimum_mappable_uid = -1
	I0308 03:44:28.534535  944177 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0308 03:44:28.534541  944177 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0308 03:44:28.534547  944177 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0308 03:44:28.534554  944177 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0308 03:44:28.534559  944177 command_runner.go:130] > # minimum_mappable_gid = -1
	I0308 03:44:28.534570  944177 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0308 03:44:28.534580  944177 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0308 03:44:28.534595  944177 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0308 03:44:28.534602  944177 command_runner.go:130] > # ctr_stop_timeout = 30
	I0308 03:44:28.534607  944177 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0308 03:44:28.534615  944177 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0308 03:44:28.534620  944177 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0308 03:44:28.534627  944177 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0308 03:44:28.534630  944177 command_runner.go:130] > drop_infra_ctr = false
	I0308 03:44:28.534636  944177 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0308 03:44:28.534644  944177 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0308 03:44:28.534651  944177 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0308 03:44:28.534657  944177 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0308 03:44:28.534664  944177 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0308 03:44:28.534669  944177 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0308 03:44:28.534677  944177 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0308 03:44:28.534682  944177 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0308 03:44:28.534688  944177 command_runner.go:130] > # shared_cpuset = ""
	I0308 03:44:28.534694  944177 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0308 03:44:28.534701  944177 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0308 03:44:28.534705  944177 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0308 03:44:28.534714  944177 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0308 03:44:28.534720  944177 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0308 03:44:28.534725  944177 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0308 03:44:28.534731  944177 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0308 03:44:28.534735  944177 command_runner.go:130] > # enable_criu_support = false
	I0308 03:44:28.534740  944177 command_runner.go:130] > # Enable/disable the generation of the container,
	I0308 03:44:28.534746  944177 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0308 03:44:28.534752  944177 command_runner.go:130] > # enable_pod_events = false
	I0308 03:44:28.534758  944177 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0308 03:44:28.534770  944177 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0308 03:44:28.534774  944177 command_runner.go:130] > # default_runtime = "runc"
	I0308 03:44:28.534781  944177 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0308 03:44:28.534789  944177 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0308 03:44:28.534799  944177 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0308 03:44:28.534813  944177 command_runner.go:130] > # creation as a file is not desired either.
	I0308 03:44:28.534823  944177 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0308 03:44:28.534830  944177 command_runner.go:130] > # the hostname is being managed dynamically.
	I0308 03:44:28.534834  944177 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0308 03:44:28.534839  944177 command_runner.go:130] > # ]
	I0308 03:44:28.534845  944177 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0308 03:44:28.534851  944177 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0308 03:44:28.534856  944177 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0308 03:44:28.534862  944177 command_runner.go:130] > # Each entry in the table should follow the format:
	I0308 03:44:28.534865  944177 command_runner.go:130] > #
	I0308 03:44:28.534869  944177 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0308 03:44:28.534876  944177 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0308 03:44:28.534880  944177 command_runner.go:130] > # runtime_type = "oci"
	I0308 03:44:28.534935  944177 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0308 03:44:28.534948  944177 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0308 03:44:28.534952  944177 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0308 03:44:28.534957  944177 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0308 03:44:28.534960  944177 command_runner.go:130] > # monitor_env = []
	I0308 03:44:28.534965  944177 command_runner.go:130] > # privileged_without_host_devices = false
	I0308 03:44:28.534970  944177 command_runner.go:130] > # allowed_annotations = []
	I0308 03:44:28.534976  944177 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0308 03:44:28.534981  944177 command_runner.go:130] > # Where:
	I0308 03:44:28.534986  944177 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0308 03:44:28.534994  944177 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0308 03:44:28.535000  944177 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0308 03:44:28.535009  944177 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0308 03:44:28.535012  944177 command_runner.go:130] > #   in $PATH.
	I0308 03:44:28.535018  944177 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0308 03:44:28.535025  944177 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0308 03:44:28.535031  944177 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0308 03:44:28.535037  944177 command_runner.go:130] > #   state.
	I0308 03:44:28.535043  944177 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0308 03:44:28.535051  944177 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0308 03:44:28.535057  944177 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0308 03:44:28.535065  944177 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0308 03:44:28.535071  944177 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0308 03:44:28.535085  944177 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0308 03:44:28.535091  944177 command_runner.go:130] > #   The currently recognized values are:
	I0308 03:44:28.535099  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0308 03:44:28.535108  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0308 03:44:28.535114  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0308 03:44:28.535130  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0308 03:44:28.535139  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0308 03:44:28.535145  944177 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0308 03:44:28.535154  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0308 03:44:28.535159  944177 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0308 03:44:28.535171  944177 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0308 03:44:28.535177  944177 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0308 03:44:28.535184  944177 command_runner.go:130] > #   deprecated option "conmon".
	I0308 03:44:28.535191  944177 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0308 03:44:28.535198  944177 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0308 03:44:28.535204  944177 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0308 03:44:28.535211  944177 command_runner.go:130] > #   should be moved to the container's cgroup
	I0308 03:44:28.535217  944177 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0308 03:44:28.535224  944177 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0308 03:44:28.535230  944177 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0308 03:44:28.535238  944177 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0308 03:44:28.535241  944177 command_runner.go:130] > #
	I0308 03:44:28.535245  944177 command_runner.go:130] > # Using the seccomp notifier feature:
	I0308 03:44:28.535248  944177 command_runner.go:130] > #
	I0308 03:44:28.535254  944177 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0308 03:44:28.535262  944177 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0308 03:44:28.535265  944177 command_runner.go:130] > #
	I0308 03:44:28.535271  944177 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0308 03:44:28.535279  944177 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0308 03:44:28.535282  944177 command_runner.go:130] > #
	I0308 03:44:28.535288  944177 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0308 03:44:28.535293  944177 command_runner.go:130] > # feature.
	I0308 03:44:28.535296  944177 command_runner.go:130] > #
	I0308 03:44:28.535302  944177 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0308 03:44:28.535315  944177 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0308 03:44:28.535324  944177 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0308 03:44:28.535335  944177 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0308 03:44:28.535346  944177 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0308 03:44:28.535351  944177 command_runner.go:130] > #
	I0308 03:44:28.535357  944177 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0308 03:44:28.535364  944177 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0308 03:44:28.535367  944177 command_runner.go:130] > #
	I0308 03:44:28.535373  944177 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0308 03:44:28.535381  944177 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0308 03:44:28.535384  944177 command_runner.go:130] > #
	I0308 03:44:28.535389  944177 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0308 03:44:28.535398  944177 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0308 03:44:28.535401  944177 command_runner.go:130] > # limitation.
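The comment block above explains how additional runtime handlers are declared and which experimental annotations a handler may be allowed to process. As a minimal sketch (not part of this run), a drop-in under the standard /etc/crio/crio.conf.d/ directory could register a hypothetical "runc-notifier" handler that is allowed to use the seccomp notifier annotation described above:

# Hypothetical drop-in; the handler name and file name are illustrative only.
sudo tee /etc/crio/crio.conf.d/10-runc-notifier.conf >/dev/null <<'EOF'
[crio.runtime.runtimes.runc-notifier]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
EOF
sudo systemctl restart crio   # reload so CRI-O registers the new handler

A pod opting in would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, matching the constraints noted in the comments above.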
	I0308 03:44:28.535405  944177 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0308 03:44:28.535410  944177 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0308 03:44:28.535416  944177 command_runner.go:130] > runtime_type = "oci"
	I0308 03:44:28.535419  944177 command_runner.go:130] > runtime_root = "/run/runc"
	I0308 03:44:28.535426  944177 command_runner.go:130] > runtime_config_path = ""
	I0308 03:44:28.535431  944177 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0308 03:44:28.535434  944177 command_runner.go:130] > monitor_cgroup = "pod"
	I0308 03:44:28.535440  944177 command_runner.go:130] > monitor_exec_cgroup = ""
	I0308 03:44:28.535444  944177 command_runner.go:130] > monitor_env = [
	I0308 03:44:28.535452  944177 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0308 03:44:28.535455  944177 command_runner.go:130] > ]
	I0308 03:44:28.535460  944177 command_runner.go:130] > privileged_without_host_devices = false
	I0308 03:44:28.535469  944177 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0308 03:44:28.535473  944177 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0308 03:44:28.535479  944177 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0308 03:44:28.535488  944177 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0308 03:44:28.535495  944177 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0308 03:44:28.535503  944177 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0308 03:44:28.535512  944177 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0308 03:44:28.535521  944177 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0308 03:44:28.535526  944177 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0308 03:44:28.535534  944177 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0308 03:44:28.535540  944177 command_runner.go:130] > # Example:
	I0308 03:44:28.535544  944177 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0308 03:44:28.535554  944177 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0308 03:44:28.535559  944177 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0308 03:44:28.535565  944177 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0308 03:44:28.535571  944177 command_runner.go:130] > # cpuset = 0
	I0308 03:44:28.535574  944177 command_runner.go:130] > # cpushares = "0-1"
	I0308 03:44:28.535577  944177 command_runner.go:130] > # Where:
	I0308 03:44:28.535581  944177 command_runner.go:130] > # The workload name is workload-type.
	I0308 03:44:28.535587  944177 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0308 03:44:28.535592  944177 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0308 03:44:28.535597  944177 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0308 03:44:28.535604  944177 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0308 03:44:28.535610  944177 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
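Following the workload example in the comments above, a pod opting into the "workload-type" workload would carry the activation annotation plus a per-container override. A hedged sketch, assuming that workload table is actually configured and using a hypothetical container name "app":

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                                   # hypothetical name
  annotations:
    io.crio/workload: ""                                # activation annotation (key only, value ignored)
    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override, mirroring the example above
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF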
	I0308 03:44:28.535614  944177 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0308 03:44:28.535620  944177 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0308 03:44:28.535624  944177 command_runner.go:130] > # Default value is set to true
	I0308 03:44:28.535628  944177 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0308 03:44:28.535633  944177 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0308 03:44:28.535637  944177 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0308 03:44:28.535641  944177 command_runner.go:130] > # Default value is set to 'false'
	I0308 03:44:28.535645  944177 command_runner.go:130] > # disable_hostport_mapping = false
	I0308 03:44:28.535654  944177 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0308 03:44:28.535657  944177 command_runner.go:130] > #
	I0308 03:44:28.535665  944177 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0308 03:44:28.535671  944177 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0308 03:44:28.535679  944177 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0308 03:44:28.535686  944177 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0308 03:44:28.535693  944177 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0308 03:44:28.535696  944177 command_runner.go:130] > [crio.image]
	I0308 03:44:28.535703  944177 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0308 03:44:28.535708  944177 command_runner.go:130] > # default_transport = "docker://"
	I0308 03:44:28.535713  944177 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0308 03:44:28.535722  944177 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0308 03:44:28.535725  944177 command_runner.go:130] > # global_auth_file = ""
	I0308 03:44:28.535730  944177 command_runner.go:130] > # The image used to instantiate infra containers.
	I0308 03:44:28.535737  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.535741  944177 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0308 03:44:28.535754  944177 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0308 03:44:28.535762  944177 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0308 03:44:28.535767  944177 command_runner.go:130] > # This option supports live configuration reload.
	I0308 03:44:28.535774  944177 command_runner.go:130] > # pause_image_auth_file = ""
	I0308 03:44:28.535781  944177 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0308 03:44:28.535790  944177 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0308 03:44:28.535795  944177 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0308 03:44:28.535803  944177 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0308 03:44:28.535807  944177 command_runner.go:130] > # pause_command = "/pause"
	I0308 03:44:28.535815  944177 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0308 03:44:28.535820  944177 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0308 03:44:28.535829  944177 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0308 03:44:28.535834  944177 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0308 03:44:28.535844  944177 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0308 03:44:28.535852  944177 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0308 03:44:28.535855  944177 command_runner.go:130] > # pinned_images = [
	I0308 03:44:28.535859  944177 command_runner.go:130] > # ]
	I0308 03:44:28.535865  944177 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0308 03:44:28.535874  944177 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0308 03:44:28.535879  944177 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0308 03:44:28.535887  944177 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0308 03:44:28.535892  944177 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0308 03:44:28.535898  944177 command_runner.go:130] > # signature_policy = ""
	I0308 03:44:28.535903  944177 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0308 03:44:28.535912  944177 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0308 03:44:28.535918  944177 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0308 03:44:28.535926  944177 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0308 03:44:28.535933  944177 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0308 03:44:28.535938  944177 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0308 03:44:28.535943  944177 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0308 03:44:28.535951  944177 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0308 03:44:28.535955  944177 command_runner.go:130] > # changing them here.
	I0308 03:44:28.535959  944177 command_runner.go:130] > # insecure_registries = [
	I0308 03:44:28.535964  944177 command_runner.go:130] > # ]
	I0308 03:44:28.535970  944177 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0308 03:44:28.535977  944177 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0308 03:44:28.535986  944177 command_runner.go:130] > # image_volumes = "mkdir"
	I0308 03:44:28.535993  944177 command_runner.go:130] > # Temporary directory to use for storing big files
	I0308 03:44:28.535997  944177 command_runner.go:130] > # big_files_temporary_dir = ""
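The [crio.image] options above are normally left commented so that the system-wide /etc/containers/registries.conf applies. If one did want to override just CRI-O, a drop-in could set, for example, a different pause image and an insecure local registry; the values below are purely illustrative:

sudo tee /etc/crio/crio.conf.d/20-images.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
insecure_registries = ["registry.local:5000"]   # hypothetical in-cluster registry
EOF
sudo systemctl restart crio   # blunt but reliable way to apply the change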
	I0308 03:44:28.536005  944177 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0308 03:44:28.536012  944177 command_runner.go:130] > # CNI plugins.
	I0308 03:44:28.536018  944177 command_runner.go:130] > [crio.network]
	I0308 03:44:28.536026  944177 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0308 03:44:28.536033  944177 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0308 03:44:28.536037  944177 command_runner.go:130] > # cni_default_network = ""
	I0308 03:44:28.536045  944177 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0308 03:44:28.536050  944177 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0308 03:44:28.536056  944177 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0308 03:44:28.536059  944177 command_runner.go:130] > # plugin_dirs = [
	I0308 03:44:28.536063  944177 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0308 03:44:28.536066  944177 command_runner.go:130] > # ]
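With the defaults above, CRI-O picks the first CNI configuration it finds under /etc/cni/net.d/ and loads plugin binaries from /opt/cni/bin/. On a node like this one they can be inspected directly, for example:

ls -l /etc/cni/net.d/   # CNI network configuration files (first one wins when cni_default_network is unset)
ls /opt/cni/bin/        # available CNI plugin binaries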
	I0308 03:44:28.536071  944177 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0308 03:44:28.536077  944177 command_runner.go:130] > [crio.metrics]
	I0308 03:44:28.536082  944177 command_runner.go:130] > # Globally enable or disable metrics support.
	I0308 03:44:28.536088  944177 command_runner.go:130] > enable_metrics = true
	I0308 03:44:28.536092  944177 command_runner.go:130] > # Specify enabled metrics collectors.
	I0308 03:44:28.536096  944177 command_runner.go:130] > # Per default all metrics are enabled.
	I0308 03:44:28.536105  944177 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0308 03:44:28.536110  944177 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0308 03:44:28.536115  944177 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0308 03:44:28.536122  944177 command_runner.go:130] > # metrics_collectors = [
	I0308 03:44:28.536125  944177 command_runner.go:130] > # 	"operations",
	I0308 03:44:28.536130  944177 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0308 03:44:28.536137  944177 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0308 03:44:28.536141  944177 command_runner.go:130] > # 	"operations_errors",
	I0308 03:44:28.536148  944177 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0308 03:44:28.536152  944177 command_runner.go:130] > # 	"image_pulls_by_name",
	I0308 03:44:28.536156  944177 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0308 03:44:28.536160  944177 command_runner.go:130] > # 	"image_pulls_failures",
	I0308 03:44:28.536169  944177 command_runner.go:130] > # 	"image_pulls_successes",
	I0308 03:44:28.536173  944177 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0308 03:44:28.536179  944177 command_runner.go:130] > # 	"image_layer_reuse",
	I0308 03:44:28.536188  944177 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0308 03:44:28.536195  944177 command_runner.go:130] > # 	"containers_oom_total",
	I0308 03:44:28.536203  944177 command_runner.go:130] > # 	"containers_oom",
	I0308 03:44:28.536209  944177 command_runner.go:130] > # 	"processes_defunct",
	I0308 03:44:28.536213  944177 command_runner.go:130] > # 	"operations_total",
	I0308 03:44:28.536219  944177 command_runner.go:130] > # 	"operations_latency_seconds",
	I0308 03:44:28.536223  944177 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0308 03:44:28.536227  944177 command_runner.go:130] > # 	"operations_errors_total",
	I0308 03:44:28.536231  944177 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0308 03:44:28.536235  944177 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0308 03:44:28.536241  944177 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0308 03:44:28.536246  944177 command_runner.go:130] > # 	"image_pulls_success_total",
	I0308 03:44:28.536252  944177 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0308 03:44:28.536256  944177 command_runner.go:130] > # 	"containers_oom_count_total",
	I0308 03:44:28.536265  944177 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0308 03:44:28.536270  944177 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0308 03:44:28.536273  944177 command_runner.go:130] > # ]
	I0308 03:44:28.536279  944177 command_runner.go:130] > # The port on which the metrics server will listen.
	I0308 03:44:28.536283  944177 command_runner.go:130] > # metrics_port = 9090
	I0308 03:44:28.536288  944177 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0308 03:44:28.536294  944177 command_runner.go:130] > # metrics_socket = ""
	I0308 03:44:28.536298  944177 command_runner.go:130] > # The certificate for the secure metrics server.
	I0308 03:44:28.536304  944177 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0308 03:44:28.536312  944177 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0308 03:44:28.536321  944177 command_runner.go:130] > # certificate on any modification event.
	I0308 03:44:28.536327  944177 command_runner.go:130] > # metrics_cert = ""
	I0308 03:44:28.536332  944177 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0308 03:44:28.536337  944177 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0308 03:44:28.536341  944177 command_runner.go:130] > # metrics_key = ""
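Since enable_metrics is set to true above and the default metrics_port is 9090, the Prometheus endpoint can be queried from within the node (e.g. via minikube ssh); a quick sanity check, assuming curl is present and the port was not overridden:

curl -s http://127.0.0.1:9090/metrics | grep -m5 '^crio_'   # sample a few CRI-O metrics (names typically start with crio_)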
	I0308 03:44:28.536346  944177 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0308 03:44:28.536350  944177 command_runner.go:130] > [crio.tracing]
	I0308 03:44:28.536356  944177 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0308 03:44:28.536362  944177 command_runner.go:130] > # enable_tracing = false
	I0308 03:44:28.536367  944177 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0308 03:44:28.536374  944177 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0308 03:44:28.536380  944177 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0308 03:44:28.536390  944177 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
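Tracing is disabled in this config; the keys above are what a drop-in would set to export spans to an OpenTelemetry collector. A sketch, assuming a collector is actually listening on the given endpoint:

sudo tee /etc/crio/crio.conf.d/30-tracing.conf >/dev/null <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"            # assumed local OTLP gRPC collector
tracing_sampling_rate_per_million = 1000000    # sample every span
EOF
sudo systemctl restart crio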
	I0308 03:44:28.536397  944177 command_runner.go:130] > # CRI-O NRI configuration.
	I0308 03:44:28.536400  944177 command_runner.go:130] > [crio.nri]
	I0308 03:44:28.536404  944177 command_runner.go:130] > # Globally enable or disable NRI.
	I0308 03:44:28.536408  944177 command_runner.go:130] > # enable_nri = false
	I0308 03:44:28.536412  944177 command_runner.go:130] > # NRI socket to listen on.
	I0308 03:44:28.536418  944177 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0308 03:44:28.536422  944177 command_runner.go:130] > # NRI plugin directory to use.
	I0308 03:44:28.536429  944177 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0308 03:44:28.536434  944177 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0308 03:44:28.536439  944177 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0308 03:44:28.536444  944177 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0308 03:44:28.536451  944177 command_runner.go:130] > # nri_disable_connections = false
	I0308 03:44:28.536456  944177 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0308 03:44:28.536463  944177 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0308 03:44:28.536467  944177 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0308 03:44:28.536472  944177 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0308 03:44:28.536478  944177 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0308 03:44:28.536484  944177 command_runner.go:130] > [crio.stats]
	I0308 03:44:28.536490  944177 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0308 03:44:28.536499  944177 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0308 03:44:28.536503  944177 command_runner.go:130] > # stats_collection_period = 0
	I0308 03:44:28.536649  944177 cni.go:84] Creating CNI manager for ""
	I0308 03:44:28.536661  944177 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 03:44:28.536672  944177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:44:28.536697  944177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-959285 NodeName:multinode-959285 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:44:28.536854  944177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-959285"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
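The generated kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. minikube drives kubeadm itself, but the same file could be exercised by hand; for example, a dry run against it (illustrative only, not something this test does):

sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run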
	I0308 03:44:28.536923  944177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 03:44:28.548943  944177 command_runner.go:130] > kubeadm
	I0308 03:44:28.548957  944177 command_runner.go:130] > kubectl
	I0308 03:44:28.548961  944177 command_runner.go:130] > kubelet
	I0308 03:44:28.548982  944177 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:44:28.549035  944177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 03:44:28.561508  944177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0308 03:44:28.582025  944177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:44:28.602326  944177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0308 03:44:28.623384  944177 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0308 03:44:28.627573  944177 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I0308 03:44:28.627636  944177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:44:28.787906  944177 ssh_runner.go:195] Run: sudo systemctl start kubelet
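The two files scp'd above are the kubelet systemd unit and its kubeadm drop-in; after the daemon-reload and start, the effective unit and its state can be checked on the node with:

systemctl cat kubelet                                        # unit plus the 10-kubeadm.conf drop-in
systemctl is-active kubelet                                  # expected to report "active"
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the 316-byte drop-in written above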
	I0308 03:44:28.804102  944177 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285 for IP: 192.168.39.174
	I0308 03:44:28.804129  944177 certs.go:194] generating shared ca certs ...
	I0308 03:44:28.804158  944177 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:44:28.804331  944177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:44:28.804369  944177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:44:28.804379  944177 certs.go:256] generating profile certs ...
	I0308 03:44:28.804459  944177 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/client.key
	I0308 03:44:28.804519  944177 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key.a2baa7d4
	I0308 03:44:28.804555  944177 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key
	I0308 03:44:28.804566  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 03:44:28.804583  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0308 03:44:28.804595  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 03:44:28.804607  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 03:44:28.804621  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 03:44:28.804633  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 03:44:28.804645  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 03:44:28.804656  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 03:44:28.804713  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:44:28.804743  944177 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:44:28.804753  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:44:28.804774  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:44:28.804796  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:44:28.804816  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:44:28.804852  944177 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:44:28.804879  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem -> /usr/share/ca-certificates/918988.pem
	I0308 03:44:28.804892  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> /usr/share/ca-certificates/9189882.pem
	I0308 03:44:28.804904  944177 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:28.805578  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:44:28.833052  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:44:28.859159  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:44:28.884896  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:44:28.910965  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 03:44:28.936643  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 03:44:28.962806  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:44:28.988186  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/multinode-959285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 03:44:29.014576  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:44:29.040191  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:44:29.066250  944177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:44:29.091835  944177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:44:29.110216  944177 ssh_runner.go:195] Run: openssl version
	I0308 03:44:29.116538  944177 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 03:44:29.116615  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:44:29.128241  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133034  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133191  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.133248  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:44:29.139185  944177 command_runner.go:130] > 51391683
	I0308 03:44:29.139370  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:44:29.149326  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:44:29.160526  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165355  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165398  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.165431  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:44:29.171504  944177 command_runner.go:130] > 3ec20f2e
	I0308 03:44:29.171559  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:44:29.181243  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:44:29.192425  944177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197084  944177 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197258  944177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.197313  944177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:44:29.203352  944177 command_runner.go:130] > b5213941
	I0308 03:44:29.203469  944177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
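The three blocks above all follow the same pattern: copy a CA certificate into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it into /etc/ssl/certs under that hash. Condensed into a small sketch for one of the certificates:

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941, as logged above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # make the cert discoverable via OpenSSL's hash lookup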
	I0308 03:44:29.213297  944177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:44:29.218272  944177 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:44:29.218298  944177 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0308 03:44:29.218304  944177 command_runner.go:130] > Device: 253,1	Inode: 9432637     Links: 1
	I0308 03:44:29.218310  944177 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 03:44:29.218316  944177 command_runner.go:130] > Access: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218323  944177 command_runner.go:130] > Modify: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218328  944177 command_runner.go:130] > Change: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218333  944177 command_runner.go:130] >  Birth: 2024-03-08 03:38:07.192142170 +0000
	I0308 03:44:29.218375  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 03:44:29.224185  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.224231  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 03:44:29.230158  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.230234  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 03:44:29.236160  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.236392  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 03:44:29.242075  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.242218  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 03:44:29.248130  944177 command_runner.go:130] > Certificate will not expire
	I0308 03:44:29.248169  944177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 03:44:29.254021  944177 command_runner.go:130] > Certificate will not expire
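Each of the checks above uses openssl's -checkend flag, which exits 0 only if the certificate stays valid for the given number of seconds (86400 = 24 hours). The same check could be looped over several profile certificates, for example:

for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    && echo "${c}: valid for at least 24h" \
    || echo "${c}: expires within 24h"
done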
	I0308 03:44:29.254229  944177 kubeadm.go:391] StartCluster: {Name:multinode-959285 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-959285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.175 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:44:29.254343  944177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:44:29.254383  944177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:44:29.291322  944177 command_runner.go:130] > 17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0
	I0308 03:44:29.291363  944177 command_runner.go:130] > f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134
	I0308 03:44:29.291369  944177 command_runner.go:130] > 730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce
	I0308 03:44:29.291380  944177 command_runner.go:130] > 875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6
	I0308 03:44:29.291459  944177 command_runner.go:130] > 92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970
	I0308 03:44:29.291569  944177 command_runner.go:130] > d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c
	I0308 03:44:29.291661  944177 command_runner.go:130] > b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa
	I0308 03:44:29.291858  944177 command_runner.go:130] > dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619
	I0308 03:44:29.293334  944177 cri.go:89] found id: "17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0"
	I0308 03:44:29.293347  944177 cri.go:89] found id: "f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134"
	I0308 03:44:29.293350  944177 cri.go:89] found id: "730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce"
	I0308 03:44:29.293354  944177 cri.go:89] found id: "875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6"
	I0308 03:44:29.293356  944177 cri.go:89] found id: "92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970"
	I0308 03:44:29.293360  944177 cri.go:89] found id: "d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c"
	I0308 03:44:29.293362  944177 cri.go:89] found id: "b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa"
	I0308 03:44:29.293365  944177 cri.go:89] found id: "dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619"
	I0308 03:44:29.293367  944177 cri.go:89] found id: ""
	I0308 03:44:29.293404  944177 ssh_runner.go:195] Run: sudo runc list -f json
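The container discovery above combines a label-filtered crictl listing with runc's own state; run by hand on the node, the equivalent commands are:

sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # container IDs in kube-system only
sudo runc list -f json                                                      # low-level view of all OCI containers runc knows about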
	
	
	==> CRI-O <==
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.594640071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c73544b-0e4c-4368-add2-37449867350e name=/runtime.v1.RuntimeService/Version
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.596748564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0d3c11b-1140-4fde-85cc-2b22ce0b1808 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.597335931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869698597312836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0d3c11b-1140-4fde-85cc-2b22ce0b1808 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.597841797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3d540a7-ff2d-4bec-abef-7db1c957b9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.597921051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3d540a7-ff2d-4bec-abef-7db1c957b9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.598346618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3d540a7-ff2d-4bec-abef-7db1c957b9a9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.641004124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1baa915-f7c6-4c2f-ada5-04fed90e28a1 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.641073062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1baa915-f7c6-4c2f-ada5-04fed90e28a1 name=/runtime.v1.RuntimeService/Version
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.642863181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7845a225-7d6c-46ad-a073-d3933b1912ea name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.643499903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869698643472792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7845a225-7d6c-46ad-a073-d3933b1912ea name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.645090722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e038bf4-4773-4dbd-895a-780684968bff name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.645145813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e038bf4-4773-4dbd-895a-780684968bff name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.645564706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e038bf4-4773-4dbd-895a-780684968bff name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.660935559Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0d0d1c3b-d4ba-4e44-9900-58c3c06d5fc2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.663698846Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-g8bd8,Uid:ec69a733-194a-42ee-b0c1-874ad9669205,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869510042571501,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:44:35.883197724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-p62xk,Uid:f755d957-2474-40b4-a0cd-2a17b2cee46d,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1709869476270184951,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:44:35.883186430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&PodSandboxMetadata{Name:kindnet-bhngm,Uid:1af93132-b76b-490c-8e4f-f9b2254b6591,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869476233611248,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-03-08T03:44:35.883200605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&PodSandboxMetadata{Name:kube-proxy-8xrsf,Uid:f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869476228778709,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:44:35.883193221Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ffa19181-f180-401c-a7e2-6e0a79bf07c4,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1709869476217875668,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T03:44:35.883202663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&PodSandboxMetadata{Name:etcd-multinode-959285,Uid:1f5416aad369f6cddede6bd4ab947efa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869471560200287,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.174:2379,kubernetes.io/config.hash: 1f5416aad369f6cddede6bd4ab947efa,kubernetes.io/config.seen: 2024-03-08T03:44:30.870687916Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Meta
data:&PodSandboxMetadata{Name:kube-apiserver-multinode-959285,Uid:4232c0eeca9b9eb59847e7cf0198d079,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869471555989364,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.174:8443,kubernetes.io/config.hash: 4232c0eeca9b9eb59847e7cf0198d079,kubernetes.io/config.seen: 2024-03-08T03:44:30.870688903Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-959285,Uid:1ad688533e699094d997283fbe8a1b36,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869471545905150,Labels:map[string]
string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1ad688533e699094d997283fbe8a1b36,kubernetes.io/config.seen: 2024-03-08T03:44:30.870683407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-959285,Uid:df2c7c193d0891f806d896d9937dca89,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1709869471527756668,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: df2c7c193d0891f806d896d9937dca89,kubernetes.io/config.seen: 2024-03-08T03:44:30.870686964Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-g8bd8,Uid:ec69a733-194a-42ee-b0c1-874ad9669205,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869160290523897,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:39:19.983081939Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ffa19181-f180-401c-a7e2-6e0a79bf07c4,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869113752143236,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\
"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T03:38:33.426666729Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-p62xk,Uid:f755d957-2474-40b4-a0cd-2a17b2cee46d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869113732565306,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:38:33.417617797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&PodSandboxMetadata{Name:kube-proxy-8xrsf,Uid:f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,Namespace:
kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869109781804350,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:38:29.469931438Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&PodSandboxMetadata{Name:kindnet-bhngm,Uid:1af93132-b76b-490c-8e4f-f9b2254b6591,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869109757606258,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,k8s-app: kindnet
,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T03:38:29.448756781Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-959285,Uid:df2c7c193d0891f806d896d9937dca89,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869090663523966,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: df2c7c193d0891f806d896d9937dca89,kubernetes.io/config.seen: 2024-03-08T03:38:10.185742803Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&PodSandboxMetadata{N
ame:kube-apiserver-multinode-959285,Uid:4232c0eeca9b9eb59847e7cf0198d079,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869090653221198,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.174:8443,kubernetes.io/config.hash: 4232c0eeca9b9eb59847e7cf0198d079,kubernetes.io/config.seen: 2024-03-08T03:38:10.185747654Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-959285,Uid:1ad688533e699094d997283fbe8a1b36,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869090652393493,Labels:map[string]string{component: ku
be-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1ad688533e699094d997283fbe8a1b36,kubernetes.io/config.seen: 2024-03-08T03:38:10.185748638Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-959285,Uid:1f5416aad369f6cddede6bd4ab947efa,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1709869090641659727,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: ht
tps://192.168.39.174:2379,kubernetes.io/config.hash: 1f5416aad369f6cddede6bd4ab947efa,kubernetes.io/config.seen: 2024-03-08T03:38:10.185746370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0d0d1c3b-d4ba-4e44-9900-58c3c06d5fc2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.665992189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d5c1c7c-94d0-4f74-81cb-636b4c6e21d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.666081949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d5c1c7c-94d0-4f74-81cb-636b4c6e21d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.667216970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d5c1c7c-94d0-4f74-81cb-636b4c6e21d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.693117858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=821f2810-0602-42f3-b97f-43cfeeab93ac name=/runtime.v1.RuntimeService/Version
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.693178209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=821f2810-0602-42f3-b97f-43cfeeab93ac name=/runtime.v1.RuntimeService/Version
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.694614531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9488a906-9b67-4391-8e21-c1ee8b5762f0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.694998277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709869698694977943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9488a906-9b67-4391-8e21-c1ee8b5762f0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.695666534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2530225c-c4da-4810-bfe6-32dc26b2f995 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.695714093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2530225c-c4da-4810-bfe6-32dc26b2f995 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:48:18 multinode-959285 crio[2846]: time="2024-03-08 03:48:18.696018937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d17683861634100422eb3fab80f67a1d5fd2aa6e74ef319e9a9c0090702724,PodSandboxId:db73c0c33ed0e03288e122cb4c72e89a5ec8e90d932b53023be6291d4a32a261,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709869510192112770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb,PodSandboxId:3425d248dd6e9acba690b349281e75b95c2c6fe61f832431179e148d15da2f73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709869476786214313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1,PodSandboxId:05025c1e185f28924bd055729f1f0c8257ad5e35a9e7fdd46b3ca0fd62c5cfc7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709869476706859838,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2
254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37,PodSandboxId:7a49b5a4798ee52ff8dfab5cc9c1160b2f863f8db1c087a6bb1f852d46320a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709869476580440696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string
]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1d633fccfd8641231c38b231429171346c5031db9a0915ddfa1f9719b7bb3be,PodSandboxId:fe9724edf637de3e6bce092c7a2e0e625c062a4a44c6e1e9472702a0ba0ab1a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709869476470556510,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d,PodSandboxId:e62d5ada13500774798273d73784bbe5375d31281a8d6f8956bca409a7b62e9a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709869471886987238,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343,PodSandboxId:8fca1c8789132e5a79f8d739c3f86ca86f003853c3f7f8287f62811e266695aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709869471875373672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68,PodSandboxId:cb7af5e69e23a8bdca6ff75ed7b5c522bc6a98fd900720a148d19ac846978947,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709869471846489437,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3,PodSandboxId:3306f288fc7643acfbcdbd812cfc550aec3d8cb2dd633c726d7532b71116fab7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709869471730608869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16e4f0321c53652f99c27327cc4a79667fb54b6f64e682e065166a987967760,PodSandboxId:6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1709869161455135507,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-g8bd8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec69a733-194a-42ee-b0c1-874ad9669205,},Annotations:map[string]string{io.kubernetes.container.hash: 3978337a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0,PodSandboxId:56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709869113941225775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p62xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755d957-2474-40b4-a0cd-2a17b2cee46d,},Annotations:map[string]string{io.kubernetes.container.hash: 2ac6d25c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f68a6db083c5f2e233327cdf3c447d7c3efb521c24079a2ddbcd54a6affb1134,PodSandboxId:6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709869113888712162,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: ffa19181-f180-401c-a7e2-6e0a79bf07c4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c0cc70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce,PodSandboxId:49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1709869112423695449,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bhngm,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1af93132-b76b-490c-8e4f-f9b2254b6591,},Annotations:map[string]string{io.kubernetes.container.hash: b7313313,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6,PodSandboxId:2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709869109977942365,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrsf,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: f5e09ab1-b468-4143-a1ed-7b967a5c6e4c,},Annotations:map[string]string{io.kubernetes.container.hash: 821ca97b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970,PodSandboxId:ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709869090924351145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: df2c7c193d0891f806d896d9937dca89,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c,PodSandboxId:009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709869090902609593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 1ad688533e699094d997283fbe8a1b36,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa,PodSandboxId:679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709869090857930283,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
232c0eeca9b9eb59847e7cf0198d079,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619,PodSandboxId:a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709869090856666544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-959285,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f5416aad369f6cddede6bd4ab947efa,},Annotations
:map[string]string{io.kubernetes.container.hash: b610a27f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2530225c-c4da-4810-bfe6-32dc26b2f995 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02d1768386163       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   db73c0c33ed0e       busybox-5b5d89c9d6-g8bd8
	e2ebb36caea2e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   3425d248dd6e9       coredns-5dd5756b68-p62xk
	e89d47e2ebb7d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   05025c1e185f2       kindnet-bhngm
	711f3f6d65ab3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   7a49b5a4798ee       kube-proxy-8xrsf
	c1d633fccfd86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   fe9724edf637d       storage-provisioner
	773e5a361b281       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   e62d5ada13500       etcd-multinode-959285
	6986001b9ff7b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   8fca1c8789132       kube-scheduler-multinode-959285
	fc0ed8400df6e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   cb7af5e69e23a       kube-controller-manager-multinode-959285
	04d727246d46d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   3306f288fc764       kube-apiserver-multinode-959285
	b16e4f0321c53       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   6fe4a93ab82e5       busybox-5b5d89c9d6-g8bd8
	17cc3da4fab78       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   56de50ef38281       coredns-5dd5756b68-p62xk
	f68a6db083c5f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   6e04f01517180       storage-provisioner
	730a5b93ab6ff       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   49403196125f0       kindnet-bhngm
	875a418eed9d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   2a17cde2c1af7       kube-proxy-8xrsf
	92713bc5e22dd       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   ccbccd91888ca       kube-scheduler-multinode-959285
	d029bc95c326b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   009413488c812       kube-controller-manager-multinode-959285
	b6da8191bde78       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   679735dda3432       kube-apiserver-multinode-959285
	dde66ebafc3a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   a76003d8ad50d       etcd-multinode-959285
	
	
	==> coredns [17cc3da4fab7826eabffad05542dc15f4db5ceb0fd95c5406f0125ff0d9631d0] <==
	[INFO] 10.244.1.2:57458 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001548452s
	[INFO] 10.244.1.2:56901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123765s
	[INFO] 10.244.1.2:42436 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102737s
	[INFO] 10.244.1.2:46569 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001071597s
	[INFO] 10.244.1.2:36601 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098038s
	[INFO] 10.244.1.2:52737 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102611s
	[INFO] 10.244.1.2:49515 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117871s
	[INFO] 10.244.0.3:41977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111507s
	[INFO] 10.244.0.3:34458 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077889s
	[INFO] 10.244.0.3:37732 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160261s
	[INFO] 10.244.0.3:45749 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011699s
	[INFO] 10.244.1.2:57295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132029s
	[INFO] 10.244.1.2:46741 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160195s
	[INFO] 10.244.1.2:49446 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081183s
	[INFO] 10.244.1.2:45135 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156419s
	[INFO] 10.244.0.3:36952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108031s
	[INFO] 10.244.0.3:51440 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159041s
	[INFO] 10.244.0.3:35793 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095292s
	[INFO] 10.244.0.3:32780 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120001s
	[INFO] 10.244.1.2:55848 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153431s
	[INFO] 10.244.1.2:45603 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124773s
	[INFO] 10.244.1.2:41815 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142748s
	[INFO] 10.244.1.2:58363 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012177s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2ebb36caea2e323e8cad1f00c0115b8af39be7069092317f5fc5e7afe48d3eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46244 - 2113 "HINFO IN 2316823638841521581.2085409816769772346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0107719s
	
	
	==> describe nodes <==
	Name:               multinode-959285
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959285
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=multinode-959285
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_38_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:38:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959285
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:48:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:44:35 +0000   Fri, 08 Mar 2024 03:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    multinode-959285
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 718ff2e15eda48259630038532e2e785
	  System UUID:                718ff2e1-5eda-4825-9630-038532e2e785
	  Boot ID:                    c0ccbce6-e354-4420-9ba8-b8aac7c8ade4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-g8bd8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-5dd5756b68-p62xk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m50s
	  kube-system                 etcd-multinode-959285                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-bhngm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-apiserver-multinode-959285             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-959285    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8xrsf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-scheduler-multinode-959285             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m48s                  kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-959285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-959285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-959285 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m50s                  node-controller  Node multinode-959285 event: Registered Node multinode-959285 in Controller
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-959285 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m49s)  kubelet          Node multinode-959285 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m49s)  kubelet          Node multinode-959285 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m49s)  kubelet          Node multinode-959285 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-959285 event: Registered Node multinode-959285 in Controller
	
	
	Name:               multinode-959285-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-959285-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=multinode-959285
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T03_45_17_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:45:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-959285-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:45:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:46:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:46:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:46:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 08 Mar 2024 03:45:47 +0000   Fri, 08 Mar 2024 03:46:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    multinode-959285-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7310cc09066c4521b42a476c8dc18cee
	  System UUID:                7310cc09-066c-4521-b42a-476c8dc18cee
	  Boot ID:                    75d8bbea-6d94-42b4-bada-8cc518a107d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-rrf76    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-97wl4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m8s
	  kube-system                 kube-proxy-vsgll            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 9m4s                  kube-proxy       
	  Normal  Starting                 2m59s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  9m8s (x5 over 9m10s)  kubelet          Node multinode-959285-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m8s (x5 over 9m10s)  kubelet          Node multinode-959285-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m8s (x5 over 9m10s)  kubelet          Node multinode-959285-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m2s                  kubelet          Node multinode-959285-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m2s (x5 over 3m4s)   kubelet          Node multinode-959285-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x5 over 3m4s)   kubelet          Node multinode-959285-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x5 over 3m4s)   kubelet          Node multinode-959285-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m57s                 kubelet          Node multinode-959285-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                  node-controller  Node multinode-959285-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062764] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.175883] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.143623] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269744] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Mar 8 03:38] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.060640] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.878074] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +1.411474] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.376047] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.079954] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.642629] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.195499] systemd-fstab-generator[1515]: Ignoring "noauto" option for root device
	[Mar 8 03:39] kauditd_printk_skb: 82 callbacks suppressed
	[Mar 8 03:44] systemd-fstab-generator[2767]: Ignoring "noauto" option for root device
	[  +0.147894] systemd-fstab-generator[2779]: Ignoring "noauto" option for root device
	[  +0.178341] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.152148] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.269491] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[ +10.267011] systemd-fstab-generator[2930]: Ignoring "noauto" option for root device
	[  +0.087807] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.853925] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.742604] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.562792] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.005031] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[Mar 8 03:45] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [773e5a361b281f72a25a80f3e161496a176d6e582b9498adade6e58c402b4d1d] <==
	{"level":"info","ts":"2024-03-08T03:44:32.576687Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T03:44:32.576816Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-08T03:44:32.577243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2024-03-08T03:44:32.579431Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-03-08T03:44:32.579581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:44:32.579631Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:44:32.599985Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T03:44:32.600338Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T03:44:32.600395Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:44:32.600566Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:44:32.600599Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:44:33.825765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.82583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.825847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-03-08T03:44:33.825858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.825907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-03-08T03:44:33.831666Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-959285 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:44:33.831833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:44:33.833206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:44:33.838391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:44:33.839566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-03-08T03:44:33.843335Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:44:33.843374Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [dde66ebafc3a1c93f86e551341eff13a031e5560d1da012d845fe1df8c8a2619] <==
	{"level":"info","ts":"2024-03-08T03:38:11.829571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T03:38:11.829596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-03-08T03:38:11.834521Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-959285 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:38:11.834715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:38:11.835734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-03-08T03:38:11.838395Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.838565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:38:11.83947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T03:38:11.84435Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:38:11.844396Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T03:38:11.84469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.844846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:38:11.847322Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:39:20.569585Z","caller":"traceutil/trace.go:171","msg":"trace[1456353751] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"139.47395ms","start":"2024-03-08T03:39:20.430069Z","end":"2024-03-08T03:39:20.569543Z","steps":["trace[1456353751] 'process raft request'  (duration: 139.305851ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:40:38.264721Z","caller":"traceutil/trace.go:171","msg":"trace[1413497469] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"182.688088ms","start":"2024-03-08T03:40:38.082004Z","end":"2024-03-08T03:40:38.264692Z","steps":["trace[1413497469] 'process raft request'  (duration: 182.348657ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T03:42:46.32746Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-08T03:42:46.327806Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-959285","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"]}
	{"level":"warn","ts":"2024-03-08T03:42:46.327985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.328101Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.414187Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.174:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T03:42:46.414303Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.174:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T03:42:46.414367Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"72f328261b8d7407","current-leader-member-id":"72f328261b8d7407"}
	{"level":"info","ts":"2024-03-08T03:42:46.416503Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.41663Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-03-08T03:42:46.416639Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-959285","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"]}
	
	
	==> kernel <==
	 03:48:19 up 10 min,  0 users,  load average: 0.39, 0.25, 0.14
	Linux multinode-959285 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [730a5b93ab6fffcd19c173e0c62cbd4e8ce0d19729427ae1c935a2f9fd4c41ce] <==
	I0308 03:42:03.460159       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:13.466173       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:13.466228       1 main.go:227] handling current node
	I0308 03:42:13.466249       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:13.466308       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:13.466448       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:13.466481       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:23.473611       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:23.473665       1 main.go:227] handling current node
	I0308 03:42:23.473675       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:23.473681       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:23.473813       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:23.473897       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:33.482224       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:33.482344       1 main.go:227] handling current node
	I0308 03:42:33.482364       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:33.482381       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:33.482515       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:33.482555       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	I0308 03:42:43.487642       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:42:43.487745       1 main.go:227] handling current node
	I0308 03:42:43.487778       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:42:43.487798       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:42:43.487932       1 main.go:223] Handling node with IPs: map[192.168.39.175:{}]
	I0308 03:42:43.487954       1 main.go:250] Node multinode-959285-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e89d47e2ebb7dd40fd8ebe76329f3d38cc70c25070f87c4db701aea00cb18de1] <==
	I0308 03:47:17.834528       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:47:27.840138       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:47:27.840207       1 main.go:227] handling current node
	I0308 03:47:27.840226       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:47:27.840232       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:47:37.845960       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:47:37.846030       1 main.go:227] handling current node
	I0308 03:47:37.846048       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:47:37.846055       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:47:47.853354       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:47:47.853397       1 main.go:227] handling current node
	I0308 03:47:47.853407       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:47:47.853413       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:47:57.864907       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:47:57.865011       1 main.go:227] handling current node
	I0308 03:47:57.865042       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:47:57.865061       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:48:07.878993       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:48:07.879048       1 main.go:227] handling current node
	I0308 03:48:07.879067       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:48:07.879074       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	I0308 03:48:17.895768       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0308 03:48:17.895842       1 main.go:227] handling current node
	I0308 03:48:17.895862       1 main.go:223] Handling node with IPs: map[192.168.39.18:{}]
	I0308 03:48:17.895872       1 main.go:250] Node multinode-959285-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [04d727246d46d2a86286be0fbd3963f990c0d26a5810a4905bde24e1a8fdaca3] <==
	I0308 03:44:35.326499       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 03:44:35.343361       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 03:44:35.343457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 03:44:35.418089       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 03:44:35.418350       1 aggregator.go:166] initial CRD sync complete...
	I0308 03:44:35.418407       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 03:44:35.418431       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 03:44:35.430526       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:44:35.505093       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 03:44:35.510673       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:44:35.511406       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 03:44:35.511446       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 03:44:35.511937       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 03:44:35.514158       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 03:44:35.516892       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 03:44:35.518582       1 cache.go:39] Caches are synced for autoregister controller
	E0308 03:44:35.522555       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0308 03:44:36.365205       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 03:44:37.934969       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 03:44:38.056236       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 03:44:38.067920       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 03:44:38.162583       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 03:44:38.178171       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 03:44:47.842525       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 03:44:47.901130       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b6da8191bde78b4a2e28ca6f0ec180b2edc014eb568894cecf0af766e027b6fa] <==
	I0308 03:42:46.369643       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0308 03:42:46.369656       1 establishing_controller.go:87] Shutting down EstablishingController
	I0308 03:42:46.369706       1 naming_controller.go:302] Shutting down NamingConditionController
	I0308 03:42:46.369728       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0308 03:42:46.369780       1 available_controller.go:439] Shutting down AvailableConditionController
	I0308 03:42:46.369845       1 controller.go:129] Ending legacy_token_tracking_controller
	I0308 03:42:46.369879       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0308 03:42:46.369931       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0308 03:42:46.369977       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 03:42:46.370000       1 autoregister_controller.go:165] Shutting down autoregister controller
	W0308 03:42:46.370099       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 03:42:46.370156       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0308 03:42:46.370181       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0308 03:42:46.370581       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0308 03:42:46.370653       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	W0308 03:42:46.370863       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 03:42:46.370954       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0308 03:42:46.371205       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.371962       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372068       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372418       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372492       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372553       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372605       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 03:42:46.372657       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d029bc95c326ba8973c32cebd16c3da0685edd904ebf950231c10c7d2f1e703c] <==
	I0308 03:39:53.911010       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:39:53.913736       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:39:53.926614       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.2.0/24"]
	I0308 03:39:53.950400       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6k8t9"
	I0308 03:39:53.950592       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jtsrw"
	I0308 03:39:54.036356       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-959285-m03"
	I0308 03:39:54.036854       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:40:00.712630       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:31.899873       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:34.061893       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-959285-m03 event: Removing Node multinode-959285-m03 from Controller"
	I0308 03:40:34.400347       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:40:34.400817       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:40:34.423494       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.3.0/24"]
	I0308 03:40:39.062945       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:40:39.692689       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:41:24.096395       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:41:24.096734       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-959285-m03 status is now: NodeNotReady"
	I0308 03:41:24.106452       1 event.go:307] "Event occurred" object="multinode-959285-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-959285-m02 status is now: NodeNotReady"
	I0308 03:41:24.113198       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-6k8t9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.130921       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vsgll" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.141577       1 event.go:307] "Event occurred" object="kube-system/kindnet-jtsrw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.158465       1 event.go:307] "Event occurred" object="kube-system/kindnet-97wl4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.172640       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-mmt2r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:41:24.182081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.312754ms"
	I0308 03:41:24.182178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.604µs"
	
	
	==> kube-controller-manager [fc0ed8400df6eb50bad70f71aa1d4ec7123f567a10a12fb03c14643b06b5cf68] <==
	I0308 03:45:22.650129       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:22.676617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.294µs"
	I0308 03:45:22.697754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.369µs"
	I0308 03:45:22.955940       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-rrf76" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-rrf76"
	I0308 03:45:24.854936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.719778ms"
	I0308 03:45:24.855163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.195µs"
	I0308 03:45:42.676565       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:42.959461       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-959285-m03 event: Removing Node multinode-959285-m03 from Controller"
	I0308 03:45:45.309465       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-959285-m03\" does not exist"
	I0308 03:45:45.312505       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:45.323699       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-959285-m03" podCIDRs=["10.244.2.0/24"]
	I0308 03:45:47.960376       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-959285-m03 event: Registered Node multinode-959285-m03 in Controller"
	I0308 03:45:51.400614       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:57.378359       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-959285-m02"
	I0308 03:45:57.973902       1 event.go:307] "Event occurred" object="multinode-959285-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-959285-m03 event: Removing Node multinode-959285-m03 from Controller"
	I0308 03:46:37.994077       1 event.go:307] "Event occurred" object="multinode-959285-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-959285-m02 status is now: NodeNotReady"
	I0308 03:46:38.013046       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vsgll" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:46:38.021003       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-rrf76" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:46:38.040483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="20.651016ms"
	I0308 03:46:38.041467       1 event.go:307] "Event occurred" object="kube-system/kindnet-97wl4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 03:46:38.042192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="24.049µs"
	I0308 03:46:47.833861       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-6k8t9"
	I0308 03:46:47.858947       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-6k8t9"
	I0308 03:46:47.859146       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-jtsrw"
	I0308 03:46:47.885904       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-jtsrw"
	
	
	==> kube-proxy [711f3f6d65ab34dbe4b131ea73ba524be30631a9d47fb2b4b919d4b3d3b8ef37] <==
	I0308 03:44:36.969819       1 server_others.go:69] "Using iptables proxy"
	I0308 03:44:36.990817       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0308 03:44:37.046696       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:44:37.046748       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:44:37.056475       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:44:37.056741       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:44:37.059170       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:44:37.059419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:44:37.067630       1 config.go:188] "Starting service config controller"
	I0308 03:44:37.067644       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:44:37.067664       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:44:37.067668       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:44:37.067921       1 config.go:315] "Starting node config controller"
	I0308 03:44:37.067927       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:44:37.168549       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:44:37.168574       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:44:37.168595       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [875a418eed9d27d2416fe91ea8e32c2f4b4719015cc404b84f1f99a863718fb6] <==
	I0308 03:38:30.340335       1 server_others.go:69] "Using iptables proxy"
	I0308 03:38:30.356554       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0308 03:38:30.538113       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 03:38:30.538159       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 03:38:30.542183       1 server_others.go:152] "Using iptables Proxier"
	I0308 03:38:30.542315       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 03:38:30.542645       1 server.go:846] "Version info" version="v1.28.4"
	I0308 03:38:30.542678       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:38:30.543935       1 config.go:188] "Starting service config controller"
	I0308 03:38:30.543985       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 03:38:30.544003       1 config.go:97] "Starting endpoint slice config controller"
	I0308 03:38:30.544006       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 03:38:30.546025       1 config.go:315] "Starting node config controller"
	I0308 03:38:30.546067       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 03:38:30.646358       1 shared_informer.go:318] Caches are synced for node config
	I0308 03:38:30.646390       1 shared_informer.go:318] Caches are synced for service config
	I0308 03:38:30.646411       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6986001b9ff7b15a693c90a3ddf3792f3df707bd8b3fc345bf6bd7abb2e83343] <==
	I0308 03:44:33.109533       1 serving.go:348] Generated self-signed cert in-memory
	W0308 03:44:35.360709       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 03:44:35.360766       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:44:35.360777       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 03:44:35.360784       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 03:44:35.434964       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 03:44:35.435016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:44:35.438834       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 03:44:35.438892       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:44:35.443043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 03:44:35.443133       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:44:35.539713       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [92713bc5e22ddb4b8f5b217a99849f70b31595cf033957ad0103714872851970] <==
	W0308 03:38:14.828787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 03:38:14.828857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 03:38:14.914038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 03:38:14.914160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 03:38:14.930398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 03:38:14.930517       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 03:38:14.945794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 03:38:14.946974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 03:38:14.968045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:14.968141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:14.997049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 03:38:14.997132       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 03:38:15.079116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.079237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.100994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.101471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.108733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 03:38:15.108873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0308 03:38:15.197359       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 03:38:15.197529       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0308 03:38:17.429188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:42:46.349917       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 03:42:46.352848       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 03:42:46.353211       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0308 03:42:46.353595       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 08 03:46:30 multinode-959285 kubelet[3062]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:46:30 multinode-959285 kubelet[3062]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.008875    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podec69a733-194a-42ee-b0c1-874ad9669205/crio-6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Error finding container 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Status 404 returned error can't find the container with id 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.009231    3062 manager.go:1106] Failed to create existing container: /kubepods/pod1af93132-b76b-490c-8e4f-f9b2254b6591/crio-49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Error finding container 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Status 404 returned error can't find the container with id 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.009505    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podffa19181-f180-401c-a7e2-6e0a79bf07c4/crio-6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Error finding container 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Status 404 returned error can't find the container with id 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.009796    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1ad688533e699094d997283fbe8a1b36/crio-009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Error finding container 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Status 404 returned error can't find the container with id 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.010079    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf755d957-2474-40b4-a0cd-2a17b2cee46d/crio-56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Error finding container 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Status 404 returned error can't find the container with id 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.010435    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod4232c0eeca9b9eb59847e7cf0198d079/crio-679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Error finding container 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Status 404 returned error can't find the container with id 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.010811    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1f5416aad369f6cddede6bd4ab947efa/crio-a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Error finding container a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Status 404 returned error can't find the container with id a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.011030    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/poddf2c7c193d0891f806d896d9937dca89/crio-ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Error finding container ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Status 404 returned error can't find the container with id ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481
	Mar 08 03:46:31 multinode-959285 kubelet[3062]: E0308 03:46:31.011312    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf5e09ab1-b468-4143-a1ed-7b967a5c6e4c/crio-2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Error finding container 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Status 404 returned error can't find the container with id 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7
	Mar 08 03:47:30 multinode-959285 kubelet[3062]: E0308 03:47:30.928667    3062 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 03:47:30 multinode-959285 kubelet[3062]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 03:47:30 multinode-959285 kubelet[3062]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 03:47:30 multinode-959285 kubelet[3062]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 03:47:30 multinode-959285 kubelet[3062]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.009056    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod4232c0eeca9b9eb59847e7cf0198d079/crio-679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Error finding container 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838: Status 404 returned error can't find the container with id 679735dda343209372696c6e2e4988d35c7c5f8586926cac6d699f6c3edd4838
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.009606    3062 manager.go:1106] Failed to create existing container: /kubepods/pod1af93132-b76b-490c-8e4f-f9b2254b6591/crio-49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Error finding container 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407: Status 404 returned error can't find the container with id 49403196125f09aa79b343db150e9ed93ab1d6879a51abf8ec7a58911aba8407
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.010040    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1ad688533e699094d997283fbe8a1b36/crio-009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Error finding container 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858: Status 404 returned error can't find the container with id 009413488c812f2f1535254cda679686ca646ac82e05f07ccd4bf1771c708858
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.010518    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf5e09ab1-b468-4143-a1ed-7b967a5c6e4c/crio-2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Error finding container 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7: Status 404 returned error can't find the container with id 2a17cde2c1af74b7ccb6a3771d8ff6ca895374881499a27d183a65dfa76874f7
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.010835    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podffa19181-f180-401c-a7e2-6e0a79bf07c4/crio-6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Error finding container 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e: Status 404 returned error can't find the container with id 6e04f0151718007a48f579c8b2e5d0128654c5d5d388be526e9f8db0588e938e
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.011341    3062 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podec69a733-194a-42ee-b0c1-874ad9669205/crio-6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Error finding container 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b: Status 404 returned error can't find the container with id 6fe4a93ab82e52146bd4965329ccae4fbdf4c1d0df10f5e7bd2fdce65343226b
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.011735    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/poddf2c7c193d0891f806d896d9937dca89/crio-ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Error finding container ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481: Status 404 returned error can't find the container with id ccbccd91888ca3b134e440227d37ec3ddd7066b5a8a0c2f661d06fdb46fef481
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.012125    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf755d957-2474-40b4-a0cd-2a17b2cee46d/crio-56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Error finding container 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1: Status 404 returned error can't find the container with id 56de50ef38281921d96cab947f0c379e722cfd71e1aaff3b22cceeca20d739b1
	Mar 08 03:47:31 multinode-959285 kubelet[3062]: E0308 03:47:31.012592    3062 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod1f5416aad369f6cddede6bd4ab947efa/crio-a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Error finding container a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0: Status 404 returned error can't find the container with id a76003d8ad50df5508a97a630d84d47a7b415d2a46c7fb94a55d1e4dc149a3f0
	

-- /stdout --
** stderr ** 
	E0308 03:48:18.237041  945610 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18333-911675/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-959285 -n multinode-959285
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-959285 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.43s)

x
+
TestPreload (181.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-001336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0308 03:52:52.008929  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:53:32.257183  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-001336 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m47.608667407s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-001336 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-001336 image pull gcr.io/k8s-minikube/busybox: (1.073175114s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-001336
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-001336: (7.307550184s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-001336 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-001336 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.840609232s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-001336 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-03-08 03:55:25.807590804 +0000 UTC m=+3578.846499833
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-001336 -n test-preload-001336
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-001336 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-001336 logs -n 25: (1.168556161s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285 sudo cat                                       | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt                       | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m02:/home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n                                                                 | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | multinode-959285-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-959285 ssh -n multinode-959285-m02 sudo cat                                   | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | /home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-959285 node stop m03                                                          | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	| node    | multinode-959285 node start                                                             | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC | 08 Mar 24 03:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| stop    | -p multinode-959285                                                                     | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:40 UTC |                     |
	| start   | -p multinode-959285                                                                     | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:42 UTC | 08 Mar 24 03:45 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC |                     |
	| node    | multinode-959285 node delete                                                            | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC | 08 Mar 24 03:45 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-959285 stop                                                                   | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:45 UTC |                     |
	| start   | -p multinode-959285                                                                     | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:48 UTC | 08 Mar 24 03:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-959285                                                                | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:51 UTC |                     |
	| start   | -p multinode-959285-m02                                                                 | multinode-959285-m02 | jenkins | v1.32.0 | 08 Mar 24 03:51 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-959285-m03                                                                 | multinode-959285-m03 | jenkins | v1.32.0 | 08 Mar 24 03:51 UTC | 08 Mar 24 03:52 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-959285                                                                 | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:52 UTC |                     |
	| delete  | -p multinode-959285-m03                                                                 | multinode-959285-m03 | jenkins | v1.32.0 | 08 Mar 24 03:52 UTC | 08 Mar 24 03:52 UTC |
	| delete  | -p multinode-959285                                                                     | multinode-959285     | jenkins | v1.32.0 | 08 Mar 24 03:52 UTC | 08 Mar 24 03:52 UTC |
	| start   | -p test-preload-001336                                                                  | test-preload-001336  | jenkins | v1.32.0 | 08 Mar 24 03:52 UTC | 08 Mar 24 03:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-001336 image pull                                                          | test-preload-001336  | jenkins | v1.32.0 | 08 Mar 24 03:54 UTC | 08 Mar 24 03:54 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-001336                                                                  | test-preload-001336  | jenkins | v1.32.0 | 08 Mar 24 03:54 UTC | 08 Mar 24 03:54 UTC |
	| start   | -p test-preload-001336                                                                  | test-preload-001336  | jenkins | v1.32.0 | 08 Mar 24 03:54 UTC | 08 Mar 24 03:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-001336 image list                                                          | test-preload-001336  | jenkins | v1.32.0 | 08 Mar 24 03:55 UTC | 08 Mar 24 03:55 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 03:54:23
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 03:54:23.780096  947676 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:54:23.780355  947676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:54:23.780366  947676 out.go:304] Setting ErrFile to fd 2...
	I0308 03:54:23.780371  947676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:54:23.780552  947676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:54:23.781103  947676 out.go:298] Setting JSON to false
	I0308 03:54:23.782062  947676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27390,"bootTime":1709842674,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:54:23.782158  947676 start.go:139] virtualization: kvm guest
	I0308 03:54:23.785491  947676 out.go:177] * [test-preload-001336] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:54:23.787252  947676 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:54:23.787269  947676 notify.go:220] Checking for updates...
	I0308 03:54:23.790025  947676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:54:23.791309  947676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:54:23.792526  947676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:54:23.793833  947676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:54:23.795024  947676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:54:23.796521  947676 config.go:182] Loaded profile config "test-preload-001336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0308 03:54:23.796910  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:54:23.796954  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:54:23.811441  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0308 03:54:23.811916  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:54:23.812529  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:54:23.812554  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:54:23.812950  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:54:23.813155  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:23.814975  947676 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 03:54:23.816300  947676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:54:23.816591  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:54:23.816624  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:54:23.831570  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0308 03:54:23.831930  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:54:23.832414  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:54:23.832459  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:54:23.832799  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:54:23.832989  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:23.865390  947676 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:54:23.866658  947676 start.go:297] selected driver: kvm2
	I0308 03:54:23.866668  947676 start.go:901] validating driver "kvm2" against &{Name:test-preload-001336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-001336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:54:23.866754  947676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:54:23.867419  947676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:54:23.867485  947676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:54:23.882168  947676 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:54:23.882481  947676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:54:23.882557  947676 cni.go:84] Creating CNI manager for ""
	I0308 03:54:23.882572  947676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 03:54:23.882623  947676 start.go:340] cluster config:
	{Name:test-preload-001336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-001336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:54:23.882712  947676 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:54:23.885031  947676 out.go:177] * Starting "test-preload-001336" primary control-plane node in "test-preload-001336" cluster
	I0308 03:54:23.886302  947676 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0308 03:54:23.906550  947676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:54:23.906573  947676 cache.go:56] Caching tarball of preloaded images
	I0308 03:54:23.906683  947676 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0308 03:54:23.908197  947676 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0308 03:54:23.909394  947676 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0308 03:54:23.937431  947676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0308 03:54:26.947338  947676 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0308 03:54:26.947430  947676 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0308 03:54:27.808789  947676 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0308 03:54:27.808916  947676 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/config.json ...
	I0308 03:54:27.809146  947676 start.go:360] acquireMachinesLock for test-preload-001336: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:54:27.809212  947676 start.go:364] duration metric: took 44.588µs to acquireMachinesLock for "test-preload-001336"
	I0308 03:54:27.809228  947676 start.go:96] Skipping create...Using existing machine configuration
	I0308 03:54:27.809234  947676 fix.go:54] fixHost starting: 
	I0308 03:54:27.809574  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:54:27.809608  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:54:27.825677  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0308 03:54:27.826178  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:54:27.826694  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:54:27.826722  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:54:27.827086  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:54:27.827327  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:27.827488  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetState
	I0308 03:54:27.829146  947676 fix.go:112] recreateIfNeeded on test-preload-001336: state=Stopped err=<nil>
	I0308 03:54:27.829172  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	W0308 03:54:27.829359  947676 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 03:54:27.831546  947676 out.go:177] * Restarting existing kvm2 VM for "test-preload-001336" ...
	I0308 03:54:27.832727  947676 main.go:141] libmachine: (test-preload-001336) Calling .Start
	I0308 03:54:27.832902  947676 main.go:141] libmachine: (test-preload-001336) Ensuring networks are active...
	I0308 03:54:27.833685  947676 main.go:141] libmachine: (test-preload-001336) Ensuring network default is active
	I0308 03:54:27.834102  947676 main.go:141] libmachine: (test-preload-001336) Ensuring network mk-test-preload-001336 is active
	I0308 03:54:27.834497  947676 main.go:141] libmachine: (test-preload-001336) Getting domain xml...
	I0308 03:54:27.835444  947676 main.go:141] libmachine: (test-preload-001336) Creating domain...
	I0308 03:54:29.017371  947676 main.go:141] libmachine: (test-preload-001336) Waiting to get IP...
	I0308 03:54:29.018157  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:29.018523  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:29.018598  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:29.018500  947722 retry.go:31] will retry after 298.727478ms: waiting for machine to come up
	I0308 03:54:29.319157  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:29.319582  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:29.319650  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:29.319560  947722 retry.go:31] will retry after 342.570118ms: waiting for machine to come up
	I0308 03:54:29.664412  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:29.664848  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:29.664876  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:29.664797  947722 retry.go:31] will retry after 311.492959ms: waiting for machine to come up
	I0308 03:54:29.978390  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:29.978722  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:29.978757  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:29.978686  947722 retry.go:31] will retry after 495.880032ms: waiting for machine to come up
	I0308 03:54:30.476392  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:30.476744  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:30.476778  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:30.476689  947722 retry.go:31] will retry after 604.396617ms: waiting for machine to come up
	I0308 03:54:31.082353  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:31.082693  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:31.082721  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:31.082647  947722 retry.go:31] will retry after 942.753912ms: waiting for machine to come up
	I0308 03:54:32.026651  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:32.027041  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:32.027069  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:32.026994  947722 retry.go:31] will retry after 947.420507ms: waiting for machine to come up
	I0308 03:54:32.975459  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:32.975883  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:32.975913  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:32.975832  947722 retry.go:31] will retry after 945.583351ms: waiting for machine to come up
	I0308 03:54:33.923481  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:33.923922  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:33.923942  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:33.923879  947722 retry.go:31] will retry after 1.71932912s: waiting for machine to come up
	I0308 03:54:35.645843  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:35.646219  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:35.646242  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:35.646174  947722 retry.go:31] will retry after 1.625556227s: waiting for machine to come up
	I0308 03:54:37.274050  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:37.274421  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:37.274443  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:37.274395  947722 retry.go:31] will retry after 1.758796395s: waiting for machine to come up
	I0308 03:54:39.034764  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:39.035205  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:39.035235  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:39.035170  947722 retry.go:31] will retry after 3.554210456s: waiting for machine to come up
	I0308 03:54:42.593935  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:42.594336  947676 main.go:141] libmachine: (test-preload-001336) DBG | unable to find current IP address of domain test-preload-001336 in network mk-test-preload-001336
	I0308 03:54:42.594375  947676 main.go:141] libmachine: (test-preload-001336) DBG | I0308 03:54:42.594298  947722 retry.go:31] will retry after 3.295597985s: waiting for machine to come up
	I0308 03:54:45.894005  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:45.894449  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has current primary IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:45.894467  947676 main.go:141] libmachine: (test-preload-001336) Found IP for machine: 192.168.39.18
	I0308 03:54:45.894480  947676 main.go:141] libmachine: (test-preload-001336) Reserving static IP address...
	I0308 03:54:45.895002  947676 main.go:141] libmachine: (test-preload-001336) Reserved static IP address: 192.168.39.18
	I0308 03:54:45.895029  947676 main.go:141] libmachine: (test-preload-001336) Waiting for SSH to be available...
	I0308 03:54:45.895049  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "test-preload-001336", mac: "52:54:00:36:56:12", ip: "192.168.39.18"} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:45.895077  947676 main.go:141] libmachine: (test-preload-001336) DBG | skip adding static IP to network mk-test-preload-001336 - found existing host DHCP lease matching {name: "test-preload-001336", mac: "52:54:00:36:56:12", ip: "192.168.39.18"}
	I0308 03:54:45.895093  947676 main.go:141] libmachine: (test-preload-001336) DBG | Getting to WaitForSSH function...
	I0308 03:54:45.897538  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:45.897920  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:45.897946  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:45.898137  947676 main.go:141] libmachine: (test-preload-001336) DBG | Using SSH client type: external
	I0308 03:54:45.898192  947676 main.go:141] libmachine: (test-preload-001336) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa (-rw-------)
	I0308 03:54:45.898234  947676 main.go:141] libmachine: (test-preload-001336) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:54:45.898252  947676 main.go:141] libmachine: (test-preload-001336) DBG | About to run SSH command:
	I0308 03:54:45.898265  947676 main.go:141] libmachine: (test-preload-001336) DBG | exit 0
	I0308 03:54:46.025535  947676 main.go:141] libmachine: (test-preload-001336) DBG | SSH cmd err, output: <nil>: 
	I0308 03:54:46.025960  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetConfigRaw
	I0308 03:54:46.026665  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetIP
	I0308 03:54:46.029613  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.030062  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.030084  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.030438  947676 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/config.json ...
	I0308 03:54:46.030642  947676 machine.go:94] provisionDockerMachine start ...
	I0308 03:54:46.030672  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:46.030905  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.033207  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.033500  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.033525  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.033693  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.033877  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.034031  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.034153  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.034289  947676 main.go:141] libmachine: Using SSH client type: native
	I0308 03:54:46.034510  947676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0308 03:54:46.034526  947676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 03:54:46.145663  947676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 03:54:46.145696  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetMachineName
	I0308 03:54:46.145962  947676 buildroot.go:166] provisioning hostname "test-preload-001336"
	I0308 03:54:46.145990  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetMachineName
	I0308 03:54:46.146222  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.149041  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.149442  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.149500  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.149628  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.149812  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.150037  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.150187  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.150342  947676 main.go:141] libmachine: Using SSH client type: native
	I0308 03:54:46.150524  947676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0308 03:54:46.150538  947676 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-001336 && echo "test-preload-001336" | sudo tee /etc/hostname
	I0308 03:54:46.277626  947676 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-001336
	
	I0308 03:54:46.277655  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.280490  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.280817  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.280858  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.281007  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.281192  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.281398  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.281544  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.281717  947676 main.go:141] libmachine: Using SSH client type: native
	I0308 03:54:46.281931  947676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0308 03:54:46.281951  947676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-001336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-001336/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-001336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:54:46.403653  947676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:54:46.403686  947676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:54:46.403725  947676 buildroot.go:174] setting up certificates
	I0308 03:54:46.403737  947676 provision.go:84] configureAuth start
	I0308 03:54:46.403746  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetMachineName
	I0308 03:54:46.404032  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetIP
	I0308 03:54:46.406435  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.406755  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.406787  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.406887  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.408960  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.409316  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.409343  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.409481  947676 provision.go:143] copyHostCerts
	I0308 03:54:46.409546  947676 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:54:46.409557  947676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:54:46.409620  947676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:54:46.409722  947676 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:54:46.409733  947676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:54:46.409758  947676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:54:46.409810  947676 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:54:46.409817  947676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:54:46.409837  947676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:54:46.409886  947676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.test-preload-001336 san=[127.0.0.1 192.168.39.18 localhost minikube test-preload-001336]
	I0308 03:54:46.495771  947676 provision.go:177] copyRemoteCerts
	I0308 03:54:46.495823  947676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:54:46.495846  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.498097  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.498385  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.498416  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.498602  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.498786  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.499005  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.499103  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:54:46.584381  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:54:46.610936  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 03:54:46.640982  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:54:46.666579  947676 provision.go:87] duration metric: took 262.830598ms to configureAuth
	I0308 03:54:46.666605  947676 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:54:46.666774  947676 config.go:182] Loaded profile config "test-preload-001336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0308 03:54:46.666848  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.669666  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.670039  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.670072  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.670239  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.670430  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.670611  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.670731  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.670886  947676 main.go:141] libmachine: Using SSH client type: native
	I0308 03:54:46.671087  947676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0308 03:54:46.671113  947676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:54:46.952374  947676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:54:46.952407  947676 machine.go:97] duration metric: took 921.748861ms to provisionDockerMachine
	I0308 03:54:46.952419  947676 start.go:293] postStartSetup for "test-preload-001336" (driver="kvm2")
	I0308 03:54:46.952430  947676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:54:46.952447  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:46.952771  947676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:54:46.952806  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:46.955378  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.955699  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:46.955725  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:46.955832  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:46.956017  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:46.956203  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:46.956332  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:54:47.040495  947676 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:54:47.045155  947676 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:54:47.045178  947676 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:54:47.045258  947676 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:54:47.045358  947676 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:54:47.045444  947676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:54:47.055084  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:54:47.081387  947676 start.go:296] duration metric: took 128.954686ms for postStartSetup
	I0308 03:54:47.081430  947676 fix.go:56] duration metric: took 19.272196266s for fixHost
	I0308 03:54:47.081452  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:47.084197  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.084569  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:47.084603  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.084730  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:47.084951  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:47.085144  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:47.085288  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:47.085453  947676 main.go:141] libmachine: Using SSH client type: native
	I0308 03:54:47.085655  947676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0308 03:54:47.085671  947676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 03:54:47.198335  947676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870087.167174934
	
	I0308 03:54:47.198359  947676 fix.go:216] guest clock: 1709870087.167174934
	I0308 03:54:47.198370  947676 fix.go:229] Guest: 2024-03-08 03:54:47.167174934 +0000 UTC Remote: 2024-03-08 03:54:47.081434845 +0000 UTC m=+23.348421101 (delta=85.740089ms)
	I0308 03:54:47.198401  947676 fix.go:200] guest clock delta is within tolerance: 85.740089ms
	I0308 03:54:47.198407  947676 start.go:83] releasing machines lock for "test-preload-001336", held for 19.389184813s
	I0308 03:54:47.198435  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:47.198764  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetIP
	I0308 03:54:47.201046  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.201415  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:47.201446  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.201645  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:47.202193  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:47.202401  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:54:47.202505  947676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:54:47.202545  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:47.202641  947676 ssh_runner.go:195] Run: cat /version.json
	I0308 03:54:47.202665  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:54:47.205091  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.205408  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:47.205436  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.205513  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.205608  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:47.205786  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:47.205961  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:47.205990  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:47.206018  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:47.206134  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:54:47.206186  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:54:47.206340  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:54:47.206513  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:54:47.206680  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:54:47.318503  947676 ssh_runner.go:195] Run: systemctl --version
	I0308 03:54:47.325019  947676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:54:47.469660  947676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:54:47.476899  947676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:54:47.476964  947676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:54:47.495907  947676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 03:54:47.495925  947676 start.go:494] detecting cgroup driver to use...
	I0308 03:54:47.495986  947676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:54:47.513746  947676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:54:47.529412  947676 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:54:47.529455  947676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:54:47.544731  947676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:54:47.559511  947676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:54:47.686175  947676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:54:47.843431  947676 docker.go:233] disabling docker service ...
	I0308 03:54:47.843505  947676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:54:47.859248  947676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:54:47.872358  947676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:54:48.012931  947676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:54:48.152591  947676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 03:54:48.167498  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:54:48.186864  947676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0308 03:54:48.186936  947676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:54:48.198714  947676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:54:48.198771  947676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:54:48.210456  947676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:54:48.222443  947676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
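For reference, a sketch in Go (rather than the sed one-liners run over SSH above) of the same edits to /etc/crio/crio.conf.d/02-crio.conf: set the pause image, switch the cgroup manager to cgroupfs, and pin conmon_cgroup to "pod". This is illustrative only, not minikube's implementation; the file path and option names come from the log.

// Sketch: apply the cri-o drop-in edits shown above as in-place regexp replacements.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Set the pause image (mirrors the first sed above).
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.7"`))
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(out, nil)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}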
	I0308 03:54:48.234241  947676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:54:48.246162  947676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:54:48.256853  947676 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:54:48.256899  947676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:54:48.271785  947676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:54:48.282677  947676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:54:48.423307  947676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 03:54:48.569885  947676 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:54:48.569968  947676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:54:48.575287  947676 start.go:562] Will wait 60s for crictl version
	I0308 03:54:48.575350  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:48.579519  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:54:48.617706  947676 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:54:48.617831  947676 ssh_runner.go:195] Run: crio --version
	I0308 03:54:48.647880  947676 ssh_runner.go:195] Run: crio --version
	I0308 03:54:48.681553  947676 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0308 03:54:48.683105  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetIP
	I0308 03:54:48.686190  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:48.686543  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:54:48.686578  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:54:48.686832  947676 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:54:48.691545  947676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:54:48.705639  947676 kubeadm.go:877] updating cluster {Name:test-preload-001336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-001336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:54:48.705748  947676 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0308 03:54:48.705792  947676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:54:48.753419  947676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
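The "couldn't find preloaded image" decision above comes from listing images with `crictl images --output json` and looking for the expected kube-apiserver tag. A small Go sketch of that check follows; the JSON field names assume crictl's usual output shape, so treat it as illustrative rather than minikube's actual code.

// Sketch: decide whether the preload tarball still needs to be extracted.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.24.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}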
	I0308 03:54:48.753496  947676 ssh_runner.go:195] Run: which lz4
	I0308 03:54:48.758582  947676 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 03:54:48.763323  947676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 03:54:48.763344  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0308 03:54:50.567956  947676 crio.go:444] duration metric: took 1.809396927s to copy over tarball
	I0308 03:54:50.568047  947676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 03:54:53.231506  947676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.663417382s)
	I0308 03:54:53.231544  947676 crio.go:451] duration metric: took 2.663550757s to extract the tarball
	I0308 03:54:53.231556  947676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 03:54:53.274489  947676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:54:53.323824  947676 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0308 03:54:53.323862  947676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 03:54:53.323989  947676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:54:53.324014  947676 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0308 03:54:53.324032  947676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0308 03:54:53.324045  947676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0308 03:54:53.323993  947676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0308 03:54:53.324106  947676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0308 03:54:53.324023  947676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0308 03:54:53.323992  947676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0308 03:54:53.325523  947676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0308 03:54:53.325728  947676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0308 03:54:53.325743  947676 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0308 03:54:53.325744  947676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0308 03:54:53.325819  947676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0308 03:54:53.325894  947676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0308 03:54:53.325911  947676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0308 03:54:53.325939  947676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:54:53.463680  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0308 03:54:53.470967  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0308 03:54:53.472821  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0308 03:54:53.474177  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0308 03:54:53.496350  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0308 03:54:53.523093  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0308 03:54:53.532849  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0308 03:54:53.560915  947676 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0308 03:54:53.560967  947676 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0308 03:54:53.561027  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.615941  947676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0308 03:54:53.615978  947676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0308 03:54:53.616019  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.637414  947676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:54:53.651934  947676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0308 03:54:53.651951  947676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0308 03:54:53.651982  947676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0308 03:54:53.651990  947676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0308 03:54:53.652002  947676 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0308 03:54:53.652025  947676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0308 03:54:53.652042  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.652054  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.652044  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.714923  947676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0308 03:54:53.714975  947676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0308 03:54:53.715028  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.760956  947676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0308 03:54:53.761009  947676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0308 03:54:53.761061  947676 ssh_runner.go:195] Run: which crictl
	I0308 03:54:53.761064  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0308 03:54:53.761103  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0308 03:54:53.814476  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0308 03:54:53.814498  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0308 03:54:53.814551  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0308 03:54:53.814588  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0308 03:54:53.822804  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0308 03:54:53.822885  947676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0308 03:54:53.822908  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0308 03:54:53.867364  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0308 03:54:53.867364  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0308 03:54:53.993736  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0308 03:54:53.993833  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0308 03:54:53.993887  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0308 03:54:53.993927  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0308 03:54:53.993983  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0308 03:54:53.993999  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0308 03:54:53.994016  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0308 03:54:53.994020  947676 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0308 03:54:53.994053  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0308 03:54:53.994073  947676 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0308 03:54:53.994105  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0308 03:54:53.994106  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0308 03:54:53.994063  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0308 03:54:53.994060  947676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0308 03:54:54.004584  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0308 03:54:54.008151  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0308 03:54:54.010477  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0308 03:54:56.164357  947676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.170226345s)
	I0308 03:54:56.164385  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0308 03:54:56.164404  947676 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0308 03:54:56.164450  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0308 03:54:56.164472  947676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.170319548s)
	I0308 03:54:56.164513  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0308 03:54:56.164529  947676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.170403084s)
	I0308 03:54:56.164559  947676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0308 03:54:56.915261  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0308 03:54:56.915307  947676 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0308 03:54:56.915369  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0308 03:54:59.167759  947676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252362635s)
	I0308 03:54:59.167787  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0308 03:54:59.167811  947676 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0308 03:54:59.167856  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0308 03:54:59.623301  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0308 03:54:59.623350  947676 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0308 03:54:59.623429  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0308 03:55:00.475314  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0308 03:55:00.475362  947676 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0308 03:55:00.475411  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0308 03:55:00.820308  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0308 03:55:00.820352  947676 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0308 03:55:00.820423  947676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0308 03:55:01.564733  947676 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0308 03:55:01.564775  947676 cache_images.go:123] Successfully loaded all cached images
	I0308 03:55:01.564781  947676 cache_images.go:92] duration metric: took 8.240900645s to LoadCachedImages
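The LoadCachedImages pass above stats each tarball under /var/lib/minikube/images (skipping the copy when it already exists) and then loads it into CRI-O's storage with `podman load -i`. A compact Go sketch of that loop follows; the paths and helper name are assumptions for illustration, and the real transfer and commands run over SSH.

// Sketch: load staged image tarballs into the container runtime with podman.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func loadCached(dir string, names []string) error {
	for _, name := range names {
		path := filepath.Join(dir, name)
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("missing cached image %s: %w", name, err)
		}
		// podman load reads the archived image and registers it in local storage.
		cmd := exec.Command("sudo", "podman", "load", "-i", path)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	imgs := []string{"pause_3.7", "kube-controller-manager_v1.24.4", "etcd_3.5.3-0"}
	if err := loadCached("/var/lib/minikube/images", imgs); err != nil {
		panic(err)
	}
}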
	I0308 03:55:01.564799  947676 kubeadm.go:928] updating node { 192.168.39.18 8443 v1.24.4 crio true true} ...
	I0308 03:55:01.565114  947676 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-001336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-001336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:55:01.565212  947676 ssh_runner.go:195] Run: crio config
	I0308 03:55:01.613314  947676 cni.go:84] Creating CNI manager for ""
	I0308 03:55:01.613338  947676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 03:55:01.613356  947676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:55:01.613375  947676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-001336 NodeName:test-preload-001336 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 03:55:01.613510  947676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-001336"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 03:55:01.613577  947676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0308 03:55:01.624579  947676 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:55:01.624645  947676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 03:55:01.634991  947676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0308 03:55:01.653383  947676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:55:01.671381  947676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0308 03:55:01.690173  947676 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0308 03:55:01.694486  947676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:55:01.708115  947676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:55:01.832934  947676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:55:01.850225  947676 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336 for IP: 192.168.39.18
	I0308 03:55:01.850252  947676 certs.go:194] generating shared ca certs ...
	I0308 03:55:01.850299  947676 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:55:01.850474  947676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:55:01.850519  947676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:55:01.850529  947676 certs.go:256] generating profile certs ...
	I0308 03:55:01.850617  947676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/client.key
	I0308 03:55:01.850657  947676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/apiserver.key.fc3541a7
	I0308 03:55:01.850688  947676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/proxy-client.key
	I0308 03:55:01.850787  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:55:01.850817  947676 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:55:01.850828  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:55:01.850855  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:55:01.850893  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:55:01.850925  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:55:01.850987  947676 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:55:01.851638  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:55:01.886313  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:55:01.922086  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:55:01.958397  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:55:02.003057  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 03:55:02.038161  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:55:02.078175  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:55:02.103512  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 03:55:02.129033  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:55:02.153933  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:55:02.178589  947676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:55:02.203144  947676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 03:55:02.221036  947676 ssh_runner.go:195] Run: openssl version
	I0308 03:55:02.227229  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:55:02.239392  947676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:55:02.244403  947676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:55:02.244442  947676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:55:02.250759  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:55:02.263199  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:55:02.275166  947676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:55:02.280209  947676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:55:02.280260  947676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:55:02.286478  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 03:55:02.298491  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:55:02.310622  947676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:55:02.315900  947676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:55:02.315949  947676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:55:02.322186  947676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:55:02.334423  947676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:55:02.339705  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 03:55:02.346078  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 03:55:02.352236  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 03:55:02.358724  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 03:55:02.364855  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 03:55:02.371050  947676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
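The openssl `-checkend 86400` calls above succeed when a certificate will still be valid 24 hours from now. A small Go sketch of the equivalent check, assuming a PEM-encoded certificate on disk (the path is taken from the log and only used for illustration):

// Sketch: verify a certificate remains valid for at least the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same semantics as `openssl x509 -checkend <seconds>`.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for next 24h:", ok)
}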
	I0308 03:55:02.377168  947676 kubeadm.go:391] StartCluster: {Name:test-preload-001336 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-001336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:55:02.377293  947676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:55:02.377364  947676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:55:02.416903  947676 cri.go:89] found id: ""
	I0308 03:55:02.416989  947676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 03:55:02.428865  947676 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 03:55:02.428893  947676 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 03:55:02.428899  947676 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 03:55:02.428976  947676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 03:55:02.440075  947676 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:55:02.440717  947676 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-001336" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:55:02.440873  947676 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-001336" cluster setting kubeconfig missing "test-preload-001336" context setting]
	I0308 03:55:02.441215  947676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:55:02.441837  947676 kapi.go:59] client config for test-preload-001336: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 03:55:02.442478  947676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 03:55:02.454812  947676 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.18
	I0308 03:55:02.454840  947676 kubeadm.go:1153] stopping kube-system containers ...
	I0308 03:55:02.454852  947676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 03:55:02.454911  947676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:55:02.498855  947676 cri.go:89] found id: ""
	I0308 03:55:02.498924  947676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 03:55:02.518023  947676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 03:55:02.530288  947676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 03:55:02.530310  947676 kubeadm.go:156] found existing configuration files:
	
	I0308 03:55:02.530364  947676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 03:55:02.541672  947676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 03:55:02.541729  947676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 03:55:02.553431  947676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 03:55:02.564579  947676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 03:55:02.564638  947676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 03:55:02.575890  947676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 03:55:02.586540  947676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 03:55:02.586600  947676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 03:55:02.596970  947676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 03:55:02.607616  947676 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 03:55:02.607681  947676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 03:55:02.618971  947676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 03:55:02.630315  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:02.726571  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:03.675467  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:03.976659  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:04.057867  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:04.147418  947676 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:55:04.147526  947676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:55:04.648405  947676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:55:05.148184  947676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:55:05.176044  947676 api_server.go:72] duration metric: took 1.02862942s to wait for apiserver process to appear ...
	I0308 03:55:05.176083  947676 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:55:05.176104  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:05.176609  947676 api_server.go:269] stopped: https://192.168.39.18:8443/healthz: Get "https://192.168.39.18:8443/healthz": dial tcp 192.168.39.18:8443: connect: connection refused
	I0308 03:55:05.676933  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:08.695725  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 03:55:08.695758  947676 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 03:55:08.695773  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:08.707618  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 03:55:08.707647  947676 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 03:55:09.176398  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:09.192411  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0308 03:55:09.192443  947676 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0308 03:55:09.677067  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:09.682666  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0308 03:55:09.682697  947676 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0308 03:55:10.176214  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:10.181967  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0308 03:55:10.188268  947676 api_server.go:141] control plane version: v1.24.4
	I0308 03:55:10.188295  947676 api_server.go:131] duration metric: took 5.012204062s to wait for apiserver health ...
	I0308 03:55:10.188304  947676 cni.go:84] Creating CNI manager for ""
	I0308 03:55:10.188311  947676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 03:55:10.190053  947676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 03:55:10.191421  947676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 03:55:10.202337  947676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 03:55:10.224931  947676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:55:10.233760  947676 system_pods.go:59] 7 kube-system pods found
	I0308 03:55:10.233792  947676 system_pods.go:61] "coredns-6d4b75cb6d-np8cc" [1177cd1f-3a23-4d2d-b592-1c676c796e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 03:55:10.233803  947676 system_pods.go:61] "etcd-test-preload-001336" [cfb48f65-b047-4c6c-b049-85fdde8419c3] Running
	I0308 03:55:10.233811  947676 system_pods.go:61] "kube-apiserver-test-preload-001336" [7274958d-f31a-4628-a84a-6aa9e86571c2] Running
	I0308 03:55:10.233816  947676 system_pods.go:61] "kube-controller-manager-test-preload-001336" [f75ff729-e3fc-4685-8a3b-aaebf118043e] Running
	I0308 03:55:10.233823  947676 system_pods.go:61] "kube-proxy-nvwrg" [f38dcb89-468f-48f0-abd0-286c13ebbae2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 03:55:10.233840  947676 system_pods.go:61] "kube-scheduler-test-preload-001336" [c48db80c-18bf-4040-a19e-18dcf42fce6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 03:55:10.233849  947676 system_pods.go:61] "storage-provisioner" [3e5bc169-afbf-41a5-86a7-cc7f8095c375] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 03:55:10.233858  947676 system_pods.go:74] duration metric: took 8.906721ms to wait for pod list to return data ...
	I0308 03:55:10.233877  947676 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:55:10.238234  947676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:55:10.238258  947676 node_conditions.go:123] node cpu capacity is 2
	I0308 03:55:10.238273  947676 node_conditions.go:105] duration metric: took 4.389662ms to run NodePressure ...
	I0308 03:55:10.238294  947676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 03:55:10.477780  947676 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 03:55:10.481882  947676 kubeadm.go:733] kubelet initialised
	I0308 03:55:10.481903  947676 kubeadm.go:734] duration metric: took 4.101806ms waiting for restarted kubelet to initialise ...
	I0308 03:55:10.481911  947676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:55:10.490539  947676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:10.495686  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.495714  947676 pod_ready.go:81] duration metric: took 5.153032ms for pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:10.495725  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.495745  947676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:10.500571  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "etcd-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.500596  947676 pod_ready.go:81] duration metric: took 4.838759ms for pod "etcd-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:10.500607  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "etcd-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.500614  947676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:10.505817  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "kube-apiserver-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.505838  947676 pod_ready.go:81] duration metric: took 5.21069ms for pod "kube-apiserver-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:10.505845  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "kube-apiserver-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.505851  947676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:10.629363  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.629389  947676 pod_ready.go:81] duration metric: took 123.531353ms for pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:10.629400  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:10.629405  947676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvwrg" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:11.028752  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "kube-proxy-nvwrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:11.028783  947676 pod_ready.go:81] duration metric: took 399.36902ms for pod "kube-proxy-nvwrg" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:11.028792  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "kube-proxy-nvwrg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:11.028799  947676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:11.430474  947676 pod_ready.go:97] node "test-preload-001336" hosting pod "kube-scheduler-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:11.430504  947676 pod_ready.go:81] duration metric: took 401.698314ms for pod "kube-scheduler-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	E0308 03:55:11.430514  947676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-001336" hosting pod "kube-scheduler-test-preload-001336" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:11.430521  947676 pod_ready.go:38] duration metric: took 948.601675ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:55:11.430547  947676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 03:55:11.444670  947676 ops.go:34] apiserver oom_adj: -16
	I0308 03:55:11.444692  947676 kubeadm.go:591] duration metric: took 9.015786664s to restartPrimaryControlPlane
	I0308 03:55:11.444700  947676 kubeadm.go:393] duration metric: took 9.067541617s to StartCluster
	I0308 03:55:11.444716  947676 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:55:11.444791  947676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:55:11.445469  947676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:55:11.445698  947676 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:55:11.447363  947676 out.go:177] * Verifying Kubernetes components...
	I0308 03:55:11.445819  947676 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 03:55:11.445909  947676 config.go:182] Loaded profile config "test-preload-001336": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0308 03:55:11.448725  947676 addons.go:69] Setting storage-provisioner=true in profile "test-preload-001336"
	I0308 03:55:11.448733  947676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:55:11.448736  947676 addons.go:69] Setting default-storageclass=true in profile "test-preload-001336"
	I0308 03:55:11.448760  947676 addons.go:234] Setting addon storage-provisioner=true in "test-preload-001336"
	W0308 03:55:11.448770  947676 addons.go:243] addon storage-provisioner should already be in state true
	I0308 03:55:11.448773  947676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-001336"
	I0308 03:55:11.448795  947676 host.go:66] Checking if "test-preload-001336" exists ...
	I0308 03:55:11.449080  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:55:11.449120  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:55:11.449203  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:55:11.449238  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:55:11.464566  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45351
	I0308 03:55:11.464630  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35051
	I0308 03:55:11.465085  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:55:11.465165  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:55:11.465637  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:55:11.465656  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:55:11.465726  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:55:11.465751  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:55:11.465980  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:55:11.466077  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:55:11.466244  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetState
	I0308 03:55:11.466614  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:55:11.466660  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:55:11.468909  947676 kapi.go:59] client config for test-preload-001336: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/client.crt", KeyFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/profiles/test-preload-001336/client.key", CAFile:"/home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 03:55:11.469349  947676 addons.go:234] Setting addon default-storageclass=true in "test-preload-001336"
	W0308 03:55:11.469374  947676 addons.go:243] addon default-storageclass should already be in state true
	I0308 03:55:11.469403  947676 host.go:66] Checking if "test-preload-001336" exists ...
	I0308 03:55:11.469776  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:55:11.469824  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:55:11.482139  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0308 03:55:11.482683  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:55:11.483164  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:55:11.483186  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:55:11.483473  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:55:11.483652  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetState
	I0308 03:55:11.484449  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35697
	I0308 03:55:11.484811  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:55:11.485311  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:55:11.485336  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:55:11.485630  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:55:11.485685  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:55:11.487667  947676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:55:11.486185  947676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:55:11.489089  947676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:55:11.489182  947676 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:55:11.489208  947676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 03:55:11.489229  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:55:11.492159  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:55:11.492609  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:55:11.492636  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:55:11.492930  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:55:11.493128  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:55:11.493312  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:55:11.493469  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:55:11.503395  947676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0308 03:55:11.503842  947676 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:55:11.504315  947676 main.go:141] libmachine: Using API Version  1
	I0308 03:55:11.504342  947676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:55:11.504648  947676 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:55:11.504815  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetState
	I0308 03:55:11.506212  947676 main.go:141] libmachine: (test-preload-001336) Calling .DriverName
	I0308 03:55:11.506438  947676 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 03:55:11.506454  947676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 03:55:11.506470  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHHostname
	I0308 03:55:11.509524  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:55:11.509976  947676 main.go:141] libmachine: (test-preload-001336) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:56:12", ip: ""} in network mk-test-preload-001336: {Iface:virbr1 ExpiryTime:2024-03-08 04:54:40 +0000 UTC Type:0 Mac:52:54:00:36:56:12 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-001336 Clientid:01:52:54:00:36:56:12}
	I0308 03:55:11.510009  947676 main.go:141] libmachine: (test-preload-001336) DBG | domain test-preload-001336 has defined IP address 192.168.39.18 and MAC address 52:54:00:36:56:12 in network mk-test-preload-001336
	I0308 03:55:11.510153  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHPort
	I0308 03:55:11.510326  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHKeyPath
	I0308 03:55:11.510490  947676 main.go:141] libmachine: (test-preload-001336) Calling .GetSSHUsername
	I0308 03:55:11.510609  947676 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/test-preload-001336/id_rsa Username:docker}
	I0308 03:55:11.640791  947676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 03:55:11.659870  947676 node_ready.go:35] waiting up to 6m0s for node "test-preload-001336" to be "Ready" ...
	I0308 03:55:11.724606  947676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 03:55:11.743579  947676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 03:55:12.670598  947676 main.go:141] libmachine: Making call to close driver server
	I0308 03:55:12.670634  947676 main.go:141] libmachine: (test-preload-001336) Calling .Close
	I0308 03:55:12.670672  947676 main.go:141] libmachine: Making call to close driver server
	I0308 03:55:12.670692  947676 main.go:141] libmachine: (test-preload-001336) Calling .Close
	I0308 03:55:12.670947  947676 main.go:141] libmachine: (test-preload-001336) DBG | Closing plugin on server side
	I0308 03:55:12.670982  947676 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:55:12.670998  947676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:55:12.671007  947676 main.go:141] libmachine: Making call to close driver server
	I0308 03:55:12.671007  947676 main.go:141] libmachine: (test-preload-001336) DBG | Closing plugin on server side
	I0308 03:55:12.671014  947676 main.go:141] libmachine: (test-preload-001336) Calling .Close
	I0308 03:55:12.671023  947676 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:55:12.671032  947676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:55:12.671039  947676 main.go:141] libmachine: Making call to close driver server
	I0308 03:55:12.671046  947676 main.go:141] libmachine: (test-preload-001336) Calling .Close
	I0308 03:55:12.671211  947676 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:55:12.671227  947676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:55:12.672735  947676 main.go:141] libmachine: (test-preload-001336) DBG | Closing plugin on server side
	I0308 03:55:12.672753  947676 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:55:12.672768  947676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:55:12.683860  947676 main.go:141] libmachine: Making call to close driver server
	I0308 03:55:12.683880  947676 main.go:141] libmachine: (test-preload-001336) Calling .Close
	I0308 03:55:12.684170  947676 main.go:141] libmachine: Successfully made call to close driver server
	I0308 03:55:12.684191  947676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 03:55:12.684192  947676 main.go:141] libmachine: (test-preload-001336) DBG | Closing plugin on server side
	I0308 03:55:12.686073  947676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 03:55:12.687372  947676 addons.go:505] duration metric: took 1.241572539s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 03:55:13.664055  947676 node_ready.go:53] node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:15.666061  947676 node_ready.go:53] node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:18.164289  947676 node_ready.go:53] node "test-preload-001336" has status "Ready":"False"
	I0308 03:55:19.164490  947676 node_ready.go:49] node "test-preload-001336" has status "Ready":"True"
	I0308 03:55:19.164515  947676 node_ready.go:38] duration metric: took 7.504613684s for node "test-preload-001336" to be "Ready" ...
	I0308 03:55:19.164524  947676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:55:19.169362  947676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:19.174683  947676 pod_ready.go:92] pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:19.174708  947676 pod_ready.go:81] duration metric: took 5.321291ms for pod "coredns-6d4b75cb6d-np8cc" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:19.174716  947676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:21.181789  947676 pod_ready.go:102] pod "etcd-test-preload-001336" in "kube-system" namespace has status "Ready":"False"
	I0308 03:55:23.682227  947676 pod_ready.go:102] pod "etcd-test-preload-001336" in "kube-system" namespace has status "Ready":"False"
	I0308 03:55:24.681361  947676 pod_ready.go:92] pod "etcd-test-preload-001336" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:24.681384  947676 pod_ready.go:81] duration metric: took 5.506661808s for pod "etcd-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.681395  947676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.686269  947676 pod_ready.go:92] pod "kube-apiserver-test-preload-001336" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:24.686297  947676 pod_ready.go:81] duration metric: took 4.894966ms for pod "kube-apiserver-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.686310  947676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.690418  947676 pod_ready.go:92] pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:24.690435  947676 pod_ready.go:81] duration metric: took 4.117664ms for pod "kube-controller-manager-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.690443  947676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nvwrg" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.694468  947676 pod_ready.go:92] pod "kube-proxy-nvwrg" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:24.694482  947676 pod_ready.go:81] duration metric: took 4.033969ms for pod "kube-proxy-nvwrg" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.694489  947676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.699780  947676 pod_ready.go:92] pod "kube-scheduler-test-preload-001336" in "kube-system" namespace has status "Ready":"True"
	I0308 03:55:24.699794  947676 pod_ready.go:81] duration metric: took 5.299901ms for pod "kube-scheduler-test-preload-001336" in "kube-system" namespace to be "Ready" ...
	I0308 03:55:24.699801  947676 pod_ready.go:38] duration metric: took 5.535268121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 03:55:24.699816  947676 api_server.go:52] waiting for apiserver process to appear ...
	I0308 03:55:24.699872  947676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:55:24.716506  947676 api_server.go:72] duration metric: took 13.270776785s to wait for apiserver process to appear ...
	I0308 03:55:24.716525  947676 api_server.go:88] waiting for apiserver healthz status ...
	I0308 03:55:24.716539  947676 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I0308 03:55:24.724922  947676 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I0308 03:55:24.725899  947676 api_server.go:141] control plane version: v1.24.4
	I0308 03:55:24.725922  947676 api_server.go:131] duration metric: took 9.390102ms to wait for apiserver health ...
	I0308 03:55:24.725931  947676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 03:55:24.881534  947676 system_pods.go:59] 7 kube-system pods found
	I0308 03:55:24.881563  947676 system_pods.go:61] "coredns-6d4b75cb6d-np8cc" [1177cd1f-3a23-4d2d-b592-1c676c796e18] Running
	I0308 03:55:24.881567  947676 system_pods.go:61] "etcd-test-preload-001336" [cfb48f65-b047-4c6c-b049-85fdde8419c3] Running
	I0308 03:55:24.881571  947676 system_pods.go:61] "kube-apiserver-test-preload-001336" [7274958d-f31a-4628-a84a-6aa9e86571c2] Running
	I0308 03:55:24.881575  947676 system_pods.go:61] "kube-controller-manager-test-preload-001336" [f75ff729-e3fc-4685-8a3b-aaebf118043e] Running
	I0308 03:55:24.881578  947676 system_pods.go:61] "kube-proxy-nvwrg" [f38dcb89-468f-48f0-abd0-286c13ebbae2] Running
	I0308 03:55:24.881580  947676 system_pods.go:61] "kube-scheduler-test-preload-001336" [c48db80c-18bf-4040-a19e-18dcf42fce6b] Running
	I0308 03:55:24.881583  947676 system_pods.go:61] "storage-provisioner" [3e5bc169-afbf-41a5-86a7-cc7f8095c375] Running
	I0308 03:55:24.881589  947676 system_pods.go:74] duration metric: took 155.651294ms to wait for pod list to return data ...
	I0308 03:55:24.881598  947676 default_sa.go:34] waiting for default service account to be created ...
	I0308 03:55:25.078742  947676 default_sa.go:45] found service account: "default"
	I0308 03:55:25.078769  947676 default_sa.go:55] duration metric: took 197.164603ms for default service account to be created ...
	I0308 03:55:25.078777  947676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 03:55:25.281788  947676 system_pods.go:86] 7 kube-system pods found
	I0308 03:55:25.281825  947676 system_pods.go:89] "coredns-6d4b75cb6d-np8cc" [1177cd1f-3a23-4d2d-b592-1c676c796e18] Running
	I0308 03:55:25.281833  947676 system_pods.go:89] "etcd-test-preload-001336" [cfb48f65-b047-4c6c-b049-85fdde8419c3] Running
	I0308 03:55:25.281839  947676 system_pods.go:89] "kube-apiserver-test-preload-001336" [7274958d-f31a-4628-a84a-6aa9e86571c2] Running
	I0308 03:55:25.281845  947676 system_pods.go:89] "kube-controller-manager-test-preload-001336" [f75ff729-e3fc-4685-8a3b-aaebf118043e] Running
	I0308 03:55:25.281854  947676 system_pods.go:89] "kube-proxy-nvwrg" [f38dcb89-468f-48f0-abd0-286c13ebbae2] Running
	I0308 03:55:25.281859  947676 system_pods.go:89] "kube-scheduler-test-preload-001336" [c48db80c-18bf-4040-a19e-18dcf42fce6b] Running
	I0308 03:55:25.281865  947676 system_pods.go:89] "storage-provisioner" [3e5bc169-afbf-41a5-86a7-cc7f8095c375] Running
	I0308 03:55:25.281875  947676 system_pods.go:126] duration metric: took 203.091986ms to wait for k8s-apps to be running ...
	I0308 03:55:25.281888  947676 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 03:55:25.281945  947676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:55:25.299834  947676 system_svc.go:56] duration metric: took 17.932008ms WaitForService to wait for kubelet
	I0308 03:55:25.299867  947676 kubeadm.go:576] duration metric: took 13.854139002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 03:55:25.299888  947676 node_conditions.go:102] verifying NodePressure condition ...
	I0308 03:55:25.479234  947676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 03:55:25.479259  947676 node_conditions.go:123] node cpu capacity is 2
	I0308 03:55:25.479272  947676 node_conditions.go:105] duration metric: took 179.378394ms to run NodePressure ...
	I0308 03:55:25.479287  947676 start.go:240] waiting for startup goroutines ...
	I0308 03:55:25.479297  947676 start.go:245] waiting for cluster config update ...
	I0308 03:55:25.479320  947676 start.go:254] writing updated cluster config ...
	I0308 03:55:25.479625  947676 ssh_runner.go:195] Run: rm -f paused
	I0308 03:55:25.529238  947676 start.go:600] kubectl: 1.29.2, cluster: 1.24.4 (minor skew: 5)
	I0308 03:55:25.531118  947676 out.go:177] 
	W0308 03:55:25.532342  947676 out.go:239] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0308 03:55:25.533454  947676 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0308 03:55:25.534658  947676 out.go:177] * Done! kubectl is now configured to use "test-preload-001336" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.516440564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870126516418294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faabf3e6-9092-4009-a9a2-aeb875e2c287 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.517018431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bcfcb4e-063e-4c7f-999f-3699d92ba05f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.517122894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bcfcb4e-063e-4c7f-999f-3699d92ba05f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.517310088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:169cc1648f7bdf5162fb4dd78239d569fbc6a30d99f206a227442276fc04bc1a,PodSandboxId:0b1e88369dd0baf2a5bddeb96386b4483867d1d2b3b0b29c2fbd75fd469587fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1709870117223440338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-np8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1177cd1f-3a23-4d2d-b592-1c676c796e18,},Annotations:map[string]string{io.kubernetes.container.hash: 403a3715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a265264219cff45f772ea9ffeca85640d101fbadb535c7223e8f332d23d18,PodSandboxId:0957b64551ec9917f4a510ab8c95373d43a6d52a9a0046efb096e8b49c913b32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1709870109879812854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nvwrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f38dcb89-468f-48f0-abd0-286c13ebbae2,},Annotations:map[string]string{io.kubernetes.container.hash: e7cea4da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce3dd14ab0cba776dbb5ccc5a48a125498b13dc0abd65b9c2113749f0c4c145,PodSandboxId:48047421372c7efde46ef6ff2fd1932265fd6a817b8231b939d329e2326b3475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709870109512720829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
5bc169-afbf-41a5-86a7-cc7f8095c375,},Annotations:map[string]string{io.kubernetes.container.hash: 6da87b86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9e42333b835450f03895aa9cb7909fde65d241945912eb6c5f21ca9684694,PodSandboxId:c0a0cf2f9cc23001c783471f49e6666de63a60b63413faa53c7955c70c82a944,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1709870104999836993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1008123
ce4a5fce668cc4526fa38fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f3869ae0ee05b8acf38e49272cf0b8ddd9b554b9b4b89e3af776937b294fff,PodSandboxId:bf1d74048adfa39b43e2a5d9a01d8d88e05b003522aea6ab209111fade673354,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1709870104972828002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba98e368ba1295bea04d7c82abdd222,},Annotations:map
[string]string{io.kubernetes.container.hash: 8d77f62c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660314882a37c770e6a61659e38083f41733caeca15577ddf47f7d896fea795b,PodSandboxId:c2e70bfc2d89cc443f1a68e4fa5c7c26a145833487c2f705b7e1f2c7e495b777,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1709870104908281954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baecf7dce379e167e53cb9064a0e657,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8147002691f1d3d98eba052ca2d36980f03543d0102fd52fa03b369bd5b0de9f,PodSandboxId:23b98e79ef05ed6a067c4387e5e3f8bbf7c2ac62ce895e5c0e6f3a9389e99437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1709870104888571540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095c91857401f028e8279a42aabc3031,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9cf0ad43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bcfcb4e-063e-4c7f-999f-3699d92ba05f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.560540243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8856ff60-139e-4f94-af53-e27745b1094c name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.560645417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8856ff60-139e-4f94-af53-e27745b1094c name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.562581357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7339a496-10ab-474c-947e-f1de0fce3b7b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.563079106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870126563056264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7339a496-10ab-474c-947e-f1de0fce3b7b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.563809439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e6bb561-1845-4b37-a88b-ae161df7afa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.563860569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e6bb561-1845-4b37-a88b-ae161df7afa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.564264363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:169cc1648f7bdf5162fb4dd78239d569fbc6a30d99f206a227442276fc04bc1a,PodSandboxId:0b1e88369dd0baf2a5bddeb96386b4483867d1d2b3b0b29c2fbd75fd469587fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1709870117223440338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-np8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1177cd1f-3a23-4d2d-b592-1c676c796e18,},Annotations:map[string]string{io.kubernetes.container.hash: 403a3715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a265264219cff45f772ea9ffeca85640d101fbadb535c7223e8f332d23d18,PodSandboxId:0957b64551ec9917f4a510ab8c95373d43a6d52a9a0046efb096e8b49c913b32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1709870109879812854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nvwrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f38dcb89-468f-48f0-abd0-286c13ebbae2,},Annotations:map[string]string{io.kubernetes.container.hash: e7cea4da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce3dd14ab0cba776dbb5ccc5a48a125498b13dc0abd65b9c2113749f0c4c145,PodSandboxId:48047421372c7efde46ef6ff2fd1932265fd6a817b8231b939d329e2326b3475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709870109512720829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
5bc169-afbf-41a5-86a7-cc7f8095c375,},Annotations:map[string]string{io.kubernetes.container.hash: 6da87b86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9e42333b835450f03895aa9cb7909fde65d241945912eb6c5f21ca9684694,PodSandboxId:c0a0cf2f9cc23001c783471f49e6666de63a60b63413faa53c7955c70c82a944,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1709870104999836993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1008123
ce4a5fce668cc4526fa38fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f3869ae0ee05b8acf38e49272cf0b8ddd9b554b9b4b89e3af776937b294fff,PodSandboxId:bf1d74048adfa39b43e2a5d9a01d8d88e05b003522aea6ab209111fade673354,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1709870104972828002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba98e368ba1295bea04d7c82abdd222,},Annotations:map
[string]string{io.kubernetes.container.hash: 8d77f62c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660314882a37c770e6a61659e38083f41733caeca15577ddf47f7d896fea795b,PodSandboxId:c2e70bfc2d89cc443f1a68e4fa5c7c26a145833487c2f705b7e1f2c7e495b777,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1709870104908281954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baecf7dce379e167e53cb9064a0e657,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8147002691f1d3d98eba052ca2d36980f03543d0102fd52fa03b369bd5b0de9f,PodSandboxId:23b98e79ef05ed6a067c4387e5e3f8bbf7c2ac62ce895e5c0e6f3a9389e99437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1709870104888571540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095c91857401f028e8279a42aabc3031,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9cf0ad43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e6bb561-1845-4b37-a88b-ae161df7afa4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.607604813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=015ee0d4-41f8-4e40-9ad5-ae5b6d2ac09a name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.607711409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=015ee0d4-41f8-4e40-9ad5-ae5b6d2ac09a name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.609254137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=393afa03-8f5a-472b-b834-4abf05ac4179 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.609696223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870126609674127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=393afa03-8f5a-472b-b834-4abf05ac4179 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.610527053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b1f9dee-9f49-498a-b047-a944437ea953 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.610614556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b1f9dee-9f49-498a-b047-a944437ea953 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.610786252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:169cc1648f7bdf5162fb4dd78239d569fbc6a30d99f206a227442276fc04bc1a,PodSandboxId:0b1e88369dd0baf2a5bddeb96386b4483867d1d2b3b0b29c2fbd75fd469587fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1709870117223440338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-np8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1177cd1f-3a23-4d2d-b592-1c676c796e18,},Annotations:map[string]string{io.kubernetes.container.hash: 403a3715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a265264219cff45f772ea9ffeca85640d101fbadb535c7223e8f332d23d18,PodSandboxId:0957b64551ec9917f4a510ab8c95373d43a6d52a9a0046efb096e8b49c913b32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1709870109879812854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nvwrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f38dcb89-468f-48f0-abd0-286c13ebbae2,},Annotations:map[string]string{io.kubernetes.container.hash: e7cea4da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce3dd14ab0cba776dbb5ccc5a48a125498b13dc0abd65b9c2113749f0c4c145,PodSandboxId:48047421372c7efde46ef6ff2fd1932265fd6a817b8231b939d329e2326b3475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709870109512720829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
5bc169-afbf-41a5-86a7-cc7f8095c375,},Annotations:map[string]string{io.kubernetes.container.hash: 6da87b86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9e42333b835450f03895aa9cb7909fde65d241945912eb6c5f21ca9684694,PodSandboxId:c0a0cf2f9cc23001c783471f49e6666de63a60b63413faa53c7955c70c82a944,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1709870104999836993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1008123
ce4a5fce668cc4526fa38fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f3869ae0ee05b8acf38e49272cf0b8ddd9b554b9b4b89e3af776937b294fff,PodSandboxId:bf1d74048adfa39b43e2a5d9a01d8d88e05b003522aea6ab209111fade673354,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1709870104972828002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba98e368ba1295bea04d7c82abdd222,},Annotations:map
[string]string{io.kubernetes.container.hash: 8d77f62c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660314882a37c770e6a61659e38083f41733caeca15577ddf47f7d896fea795b,PodSandboxId:c2e70bfc2d89cc443f1a68e4fa5c7c26a145833487c2f705b7e1f2c7e495b777,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1709870104908281954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baecf7dce379e167e53cb9064a0e657,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8147002691f1d3d98eba052ca2d36980f03543d0102fd52fa03b369bd5b0de9f,PodSandboxId:23b98e79ef05ed6a067c4387e5e3f8bbf7c2ac62ce895e5c0e6f3a9389e99437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1709870104888571540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095c91857401f028e8279a42aabc3031,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9cf0ad43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b1f9dee-9f49-498a-b047-a944437ea953 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.647695052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=046ca4f6-a8a1-49b8-b422-fd86dc54851a name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.647788714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=046ca4f6-a8a1-49b8-b422-fd86dc54851a name=/runtime.v1.RuntimeService/Version
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.649037687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cdc3b39-aa07-41c0-bd8e-f394b72452c8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.649732104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870126649436806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cdc3b39-aa07-41c0-bd8e-f394b72452c8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.650554849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4da7ed6d-169f-4cc4-a300-df2dc0057fa9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.650740854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4da7ed6d-169f-4cc4-a300-df2dc0057fa9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 03:55:26 test-preload-001336 crio[667]: time="2024-03-08 03:55:26.651086650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:169cc1648f7bdf5162fb4dd78239d569fbc6a30d99f206a227442276fc04bc1a,PodSandboxId:0b1e88369dd0baf2a5bddeb96386b4483867d1d2b3b0b29c2fbd75fd469587fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1709870117223440338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-np8cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1177cd1f-3a23-4d2d-b592-1c676c796e18,},Annotations:map[string]string{io.kubernetes.container.hash: 403a3715,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7a265264219cff45f772ea9ffeca85640d101fbadb535c7223e8f332d23d18,PodSandboxId:0957b64551ec9917f4a510ab8c95373d43a6d52a9a0046efb096e8b49c913b32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1709870109879812854,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nvwrg,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: f38dcb89-468f-48f0-abd0-286c13ebbae2,},Annotations:map[string]string{io.kubernetes.container.hash: e7cea4da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce3dd14ab0cba776dbb5ccc5a48a125498b13dc0abd65b9c2113749f0c4c145,PodSandboxId:48047421372c7efde46ef6ff2fd1932265fd6a817b8231b939d329e2326b3475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709870109512720829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
5bc169-afbf-41a5-86a7-cc7f8095c375,},Annotations:map[string]string{io.kubernetes.container.hash: 6da87b86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9e42333b835450f03895aa9cb7909fde65d241945912eb6c5f21ca9684694,PodSandboxId:c0a0cf2f9cc23001c783471f49e6666de63a60b63413faa53c7955c70c82a944,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1709870104999836993,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1008123
ce4a5fce668cc4526fa38fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f3869ae0ee05b8acf38e49272cf0b8ddd9b554b9b4b89e3af776937b294fff,PodSandboxId:bf1d74048adfa39b43e2a5d9a01d8d88e05b003522aea6ab209111fade673354,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1709870104972828002,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ba98e368ba1295bea04d7c82abdd222,},Annotations:map
[string]string{io.kubernetes.container.hash: 8d77f62c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:660314882a37c770e6a61659e38083f41733caeca15577ddf47f7d896fea795b,PodSandboxId:c2e70bfc2d89cc443f1a68e4fa5c7c26a145833487c2f705b7e1f2c7e495b777,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1709870104908281954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baecf7dce379e167e53cb9064a0e657,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8147002691f1d3d98eba052ca2d36980f03543d0102fd52fa03b369bd5b0de9f,PodSandboxId:23b98e79ef05ed6a067c4387e5e3f8bbf7c2ac62ce895e5c0e6f3a9389e99437,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1709870104888571540,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-001336,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095c91857401f028e8279a42aabc3031,},Annotation
s:map[string]string{io.kubernetes.container.hash: 9cf0ad43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4da7ed6d-169f-4cc4-a300-df2dc0057fa9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	169cc1648f7bd       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   0b1e88369dd0b       coredns-6d4b75cb6d-np8cc
	9f7a265264219       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   0957b64551ec9       kube-proxy-nvwrg
	7ce3dd14ab0cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   48047421372c7       storage-provisioner
	eaa9e42333b83       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   c0a0cf2f9cc23       kube-scheduler-test-preload-001336
	f1f3869ae0ee0       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   bf1d74048adfa       etcd-test-preload-001336
	660314882a37c       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   c2e70bfc2d89c       kube-controller-manager-test-preload-001336
	8147002691f1d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   23b98e79ef05e       kube-apiserver-test-preload-001336
	
	
	==> coredns [169cc1648f7bdf5162fb4dd78239d569fbc6a30d99f206a227442276fc04bc1a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38589 - 37676 "HINFO IN 1745630605842297137.705614836442685541. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009445s
	
	
	==> describe nodes <==
	Name:               test-preload-001336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-001336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=test-preload-001336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T03_53_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 03:53:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-001336
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 03:55:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 03:55:18 +0000   Fri, 08 Mar 2024 03:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 03:55:18 +0000   Fri, 08 Mar 2024 03:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 03:55:18 +0000   Fri, 08 Mar 2024 03:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 03:55:18 +0000   Fri, 08 Mar 2024 03:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    test-preload-001336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 50722c0d07b349ea887effcfa56ffa21
	  System UUID:                50722c0d-07b3-49ea-887e-ffcfa56ffa21
	  Boot ID:                    68cddb8f-dd62-4ced-8e6f-041645d0b73d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-np8cc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-001336                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-001336             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-test-preload-001336    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-nvwrg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-test-preload-001336             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node test-preload-001336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node test-preload-001336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                kubelet          Node test-preload-001336 status is now: NodeHasSufficientPID
	  Normal  NodeReady                89s                kubelet          Node test-preload-001336 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node test-preload-001336 event: Registered Node test-preload-001336 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-001336 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-001336 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-001336 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-001336 event: Registered Node test-preload-001336 in Controller
	
	
	==> dmesg <==
	[Mar 8 03:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052145] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043647] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.511049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387722] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.739978] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.490259] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.058126] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058976] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.180927] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.162892] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.270041] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Mar 8 03:55] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.063256] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.069400] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +5.583356] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.055101] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +5.541175] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [f1f3869ae0ee05b8acf38e49272cf0b8ddd9b554b9b4b89e3af776937b294fff] <==
	{"level":"info","ts":"2024-03-08T03:55:05.413Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d6d01a71dfc61a14","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-08T03:55:05.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 switched to configuration voters=(15478900995660323348)"}
	{"level":"info","ts":"2024-03-08T03:55:05.413Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3959cc3c468ccbd1","local-member-id":"d6d01a71dfc61a14","added-peer-id":"d6d01a71dfc61a14","added-peer-peer-urls":["https://192.168.39.18:2380"]}
	{"level":"info","ts":"2024-03-08T03:55:05.416Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3959cc3c468ccbd1","local-member-id":"d6d01a71dfc61a14","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:55:05.416Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T03:55:05.420Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d6d01a71dfc61a14","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-08T03:55:05.424Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T03:55:05.429Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d6d01a71dfc61a14","initial-advertise-peer-urls":["https://192.168.39.18:2380"],"listen-peer-urls":["https://192.168.39.18:2380"],"advertise-client-urls":["https://192.168.39.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T03:55:05.429Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T03:55:05.429Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-03-08T03:55:05.429Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgPreVoteResp from d6d01a71dfc61a14 at term 2"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgVoteResp from d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T03:55:05.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d6d01a71dfc61a14 elected leader d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2024-03-08T03:55:05.767Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d6d01a71dfc61a14","local-member-attributes":"{Name:test-preload-001336 ClientURLs:[https://192.168.39.18:2379]}","request-path":"/0/members/d6d01a71dfc61a14/attributes","cluster-id":"3959cc3c468ccbd1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T03:55:05.772Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:55:05.772Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T03:55:05.775Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T03:55:05.792Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.18:2379"}
	{"level":"info","ts":"2024-03-08T03:55:05.772Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T03:55:05.793Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:55:26 up 0 min,  0 users,  load average: 0.55, 0.15, 0.05
	Linux test-preload-001336 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8147002691f1d3d98eba052ca2d36980f03543d0102fd52fa03b369bd5b0de9f] <==
	I0308 03:55:08.651473       1 establishing_controller.go:76] Starting EstablishingController
	I0308 03:55:08.651483       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0308 03:55:08.651639       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0308 03:55:08.651729       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 03:55:08.651749       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 03:55:08.680518       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0308 03:55:08.727623       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 03:55:08.735691       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0308 03:55:08.736802       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0308 03:55:08.739787       1 cache.go:39] Caches are synced for autoregister controller
	E0308 03:55:08.763806       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0308 03:55:08.765640       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0308 03:55:08.781519       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0308 03:55:08.804770       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 03:55:08.824342       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 03:55:09.314705       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0308 03:55:09.627556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 03:55:10.131494       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0308 03:55:10.386028       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0308 03:55:10.395889       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0308 03:55:10.429740       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0308 03:55:10.452405       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 03:55:10.458856       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 03:55:21.244127       1 controller.go:611] quota admission added evaluator for: endpoints
	I0308 03:55:21.257121       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [660314882a37c770e6a61659e38083f41733caeca15577ddf47f7d896fea795b] <==
	I0308 03:55:21.078143       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0308 03:55:21.078216       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-001336. Assuming now as a timestamp.
	I0308 03:55:21.078264       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0308 03:55:21.078401       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0308 03:55:21.078581       1 event.go:294] "Event occurred" object="test-preload-001336" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-001336 event: Registered Node test-preload-001336 in Controller"
	I0308 03:55:21.090486       1 shared_informer.go:262] Caches are synced for node
	I0308 03:55:21.090632       1 range_allocator.go:173] Starting range CIDR allocator
	I0308 03:55:21.090738       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0308 03:55:21.090768       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0308 03:55:21.105079       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0308 03:55:21.109288       1 shared_informer.go:262] Caches are synced for crt configmap
	I0308 03:55:21.156155       1 shared_informer.go:262] Caches are synced for deployment
	I0308 03:55:21.171515       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0308 03:55:21.215239       1 shared_informer.go:262] Caches are synced for disruption
	I0308 03:55:21.215278       1 disruption.go:371] Sending events to api server.
	I0308 03:55:21.236964       1 shared_informer.go:262] Caches are synced for endpoint
	I0308 03:55:21.249145       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0308 03:55:21.258067       1 shared_informer.go:262] Caches are synced for daemon sets
	I0308 03:55:21.263551       1 shared_informer.go:262] Caches are synced for resource quota
	I0308 03:55:21.276131       1 shared_informer.go:262] Caches are synced for stateful set
	I0308 03:55:21.300260       1 shared_informer.go:262] Caches are synced for resource quota
	I0308 03:55:21.311104       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0308 03:55:21.728824       1 shared_informer.go:262] Caches are synced for garbage collector
	I0308 03:55:21.736047       1 shared_informer.go:262] Caches are synced for garbage collector
	I0308 03:55:21.736085       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [9f7a265264219cff45f772ea9ffeca85640d101fbadb535c7223e8f332d23d18] <==
	I0308 03:55:10.085838       1 node.go:163] Successfully retrieved node IP: 192.168.39.18
	I0308 03:55:10.086010       1 server_others.go:138] "Detected node IP" address="192.168.39.18"
	I0308 03:55:10.086062       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0308 03:55:10.120309       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0308 03:55:10.120376       1 server_others.go:206] "Using iptables Proxier"
	I0308 03:55:10.120635       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0308 03:55:10.121477       1 server.go:661] "Version info" version="v1.24.4"
	I0308 03:55:10.121522       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:55:10.123527       1 config.go:317] "Starting service config controller"
	I0308 03:55:10.123571       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0308 03:55:10.123601       1 config.go:226] "Starting endpoint slice config controller"
	I0308 03:55:10.123617       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0308 03:55:10.124656       1 config.go:444] "Starting node config controller"
	I0308 03:55:10.125568       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0308 03:55:10.223894       1 shared_informer.go:262] Caches are synced for service config
	I0308 03:55:10.224118       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0308 03:55:10.227013       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [eaa9e42333b835450f03895aa9cb7909fde65d241945912eb6c5f21ca9684694] <==
	I0308 03:55:06.461750       1 serving.go:348] Generated self-signed cert in-memory
	W0308 03:55:08.693333       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 03:55:08.693386       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 03:55:08.693399       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 03:55:08.693407       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 03:55:08.739068       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0308 03:55:08.739112       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 03:55:08.748732       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0308 03:55:08.749735       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 03:55:08.749789       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 03:55:08.749817       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 03:55:08.850847       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.107247    1066 apiserver.go:52] "Watching apiserver"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.110809    1066 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.110985    1066 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.111061    1066 topology_manager.go:200] "Topology Admit Handler"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.113598    1066 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-np8cc" podUID=1177cd1f-3a23-4d2d-b592-1c676c796e18
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151554    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e5bc169-afbf-41a5-86a7-cc7f8095c375-tmp\") pod \"storage-provisioner\" (UID: \"3e5bc169-afbf-41a5-86a7-cc7f8095c375\") " pod="kube-system/storage-provisioner"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151618    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume\") pod \"coredns-6d4b75cb6d-np8cc\" (UID: \"1177cd1f-3a23-4d2d-b592-1c676c796e18\") " pod="kube-system/coredns-6d4b75cb6d-np8cc"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151648    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f38dcb89-468f-48f0-abd0-286c13ebbae2-kube-proxy\") pod \"kube-proxy-nvwrg\" (UID: \"f38dcb89-468f-48f0-abd0-286c13ebbae2\") " pod="kube-system/kube-proxy-nvwrg"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151666    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzk24\" (UniqueName: \"kubernetes.io/projected/f38dcb89-468f-48f0-abd0-286c13ebbae2-kube-api-access-mzk24\") pod \"kube-proxy-nvwrg\" (UID: \"f38dcb89-468f-48f0-abd0-286c13ebbae2\") " pod="kube-system/kube-proxy-nvwrg"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151684    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f38dcb89-468f-48f0-abd0-286c13ebbae2-xtables-lock\") pod \"kube-proxy-nvwrg\" (UID: \"f38dcb89-468f-48f0-abd0-286c13ebbae2\") " pod="kube-system/kube-proxy-nvwrg"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151700    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f38dcb89-468f-48f0-abd0-286c13ebbae2-lib-modules\") pod \"kube-proxy-nvwrg\" (UID: \"f38dcb89-468f-48f0-abd0-286c13ebbae2\") " pod="kube-system/kube-proxy-nvwrg"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151719    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m65\" (UniqueName: \"kubernetes.io/projected/1177cd1f-3a23-4d2d-b592-1c676c796e18-kube-api-access-c4m65\") pod \"coredns-6d4b75cb6d-np8cc\" (UID: \"1177cd1f-3a23-4d2d-b592-1c676c796e18\") " pod="kube-system/coredns-6d4b75cb6d-np8cc"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151740    1066 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjfn\" (UniqueName: \"kubernetes.io/projected/3e5bc169-afbf-41a5-86a7-cc7f8095c375-kube-api-access-wqjfn\") pod \"storage-provisioner\" (UID: \"3e5bc169-afbf-41a5-86a7-cc7f8095c375\") " pod="kube-system/storage-provisioner"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: I0308 03:55:09.151752    1066 reconciler.go:159] "Reconciler: start to sync state"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.166029    1066 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.257247    1066 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.257597    1066 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume podName:1177cd1f-3a23-4d2d-b592-1c676c796e18 nodeName:}" failed. No retries permitted until 2024-03-08 03:55:09.757484234 +0000 UTC m=+5.789966669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume") pod "coredns-6d4b75cb6d-np8cc" (UID: "1177cd1f-3a23-4d2d-b592-1c676c796e18") : object "kube-system"/"coredns" not registered
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.761061    1066 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 08 03:55:09 test-preload-001336 kubelet[1066]: E0308 03:55:09.761168    1066 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume podName:1177cd1f-3a23-4d2d-b592-1c676c796e18 nodeName:}" failed. No retries permitted until 2024-03-08 03:55:10.761148504 +0000 UTC m=+6.793630940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume") pod "coredns-6d4b75cb6d-np8cc" (UID: "1177cd1f-3a23-4d2d-b592-1c676c796e18") : object "kube-system"/"coredns" not registered
	Mar 08 03:55:10 test-preload-001336 kubelet[1066]: E0308 03:55:10.769725    1066 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 08 03:55:10 test-preload-001336 kubelet[1066]: E0308 03:55:10.769822    1066 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume podName:1177cd1f-3a23-4d2d-b592-1c676c796e18 nodeName:}" failed. No retries permitted until 2024-03-08 03:55:12.76980628 +0000 UTC m=+8.802288715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume") pod "coredns-6d4b75cb6d-np8cc" (UID: "1177cd1f-3a23-4d2d-b592-1c676c796e18") : object "kube-system"/"coredns" not registered
	Mar 08 03:55:11 test-preload-001336 kubelet[1066]: E0308 03:55:11.228031    1066 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-np8cc" podUID=1177cd1f-3a23-4d2d-b592-1c676c796e18
	Mar 08 03:55:12 test-preload-001336 kubelet[1066]: E0308 03:55:12.784862    1066 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 08 03:55:12 test-preload-001336 kubelet[1066]: E0308 03:55:12.784992    1066 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume podName:1177cd1f-3a23-4d2d-b592-1c676c796e18 nodeName:}" failed. No retries permitted until 2024-03-08 03:55:16.784973742 +0000 UTC m=+12.817456190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1177cd1f-3a23-4d2d-b592-1c676c796e18-config-volume") pod "coredns-6d4b75cb6d-np8cc" (UID: "1177cd1f-3a23-4d2d-b592-1c676c796e18") : object "kube-system"/"coredns" not registered
	Mar 08 03:55:13 test-preload-001336 kubelet[1066]: E0308 03:55:13.227724    1066 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-np8cc" podUID=1177cd1f-3a23-4d2d-b592-1c676c796e18
	
	
	==> storage-provisioner [7ce3dd14ab0cba776dbb5ccc5a48a125498b13dc0abd65b9c2113749f0c4c145] <==
	I0308 03:55:09.592528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-001336 -n test-preload-001336
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-001336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-001336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-001336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-001336: (1.159890546s)
--- FAIL: TestPreload (181.07s)

                                                
                                    
TestKubernetesUpgrade (401.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.26354412s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-219954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-219954" primary control-plane node in "kubernetes-upgrade-219954" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:57:25.137792  949014 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:57:25.138085  949014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:57:25.138096  949014 out.go:304] Setting ErrFile to fd 2...
	I0308 03:57:25.138100  949014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:57:25.138299  949014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:57:25.138923  949014 out.go:298] Setting JSON to false
	I0308 03:57:25.139912  949014 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27571,"bootTime":1709842674,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:57:25.140013  949014 start.go:139] virtualization: kvm guest
	I0308 03:57:25.142697  949014 out.go:177] * [kubernetes-upgrade-219954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:57:25.144293  949014 notify.go:220] Checking for updates...
	I0308 03:57:25.146849  949014 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:57:25.149480  949014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:57:25.151939  949014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:57:25.153256  949014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:57:25.155811  949014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:57:25.157266  949014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:57:25.158853  949014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:57:25.196792  949014 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 03:57:25.198051  949014 start.go:297] selected driver: kvm2
	I0308 03:57:25.198067  949014 start.go:901] validating driver "kvm2" against <nil>
	I0308 03:57:25.198082  949014 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:57:25.198976  949014 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:57:25.199072  949014 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 03:57:25.216490  949014 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 03:57:25.216545  949014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 03:57:25.216863  949014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 03:57:25.216953  949014 cni.go:84] Creating CNI manager for ""
	I0308 03:57:25.216975  949014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 03:57:25.216985  949014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 03:57:25.217049  949014 start.go:340] cluster config:
	{Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:57:25.217166  949014 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 03:57:25.218910  949014 out.go:177] * Starting "kubernetes-upgrade-219954" primary control-plane node in "kubernetes-upgrade-219954" cluster
	I0308 03:57:25.220299  949014 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 03:57:25.220347  949014 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 03:57:25.220361  949014 cache.go:56] Caching tarball of preloaded images
	I0308 03:57:25.220452  949014 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 03:57:25.220473  949014 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 03:57:25.220852  949014 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/config.json ...
	I0308 03:57:25.220886  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/config.json: {Name:mk13dc80a6ef8a135530be5686cd978453bf3f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:25.221045  949014 start.go:360] acquireMachinesLock for kubernetes-upgrade-219954: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 03:57:25.221085  949014 start.go:364] duration metric: took 26.076µs to acquireMachinesLock for "kubernetes-upgrade-219954"
	I0308 03:57:25.221104  949014 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 03:57:25.221243  949014 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 03:57:25.222938  949014 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 03:57:25.223125  949014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:57:25.223172  949014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:57:25.241172  949014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0308 03:57:25.241673  949014 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:57:25.242276  949014 main.go:141] libmachine: Using API Version  1
	I0308 03:57:25.242302  949014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:57:25.242667  949014 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:57:25.242899  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 03:57:25.243080  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:25.243258  949014 start.go:159] libmachine.API.Create for "kubernetes-upgrade-219954" (driver="kvm2")
	I0308 03:57:25.243288  949014 client.go:168] LocalClient.Create starting
	I0308 03:57:25.243321  949014 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 03:57:25.243359  949014 main.go:141] libmachine: Decoding PEM data...
	I0308 03:57:25.243390  949014 main.go:141] libmachine: Parsing certificate...
	I0308 03:57:25.243469  949014 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 03:57:25.243500  949014 main.go:141] libmachine: Decoding PEM data...
	I0308 03:57:25.243516  949014 main.go:141] libmachine: Parsing certificate...
	I0308 03:57:25.243539  949014 main.go:141] libmachine: Running pre-create checks...
	I0308 03:57:25.243573  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .PreCreateCheck
	I0308 03:57:25.243989  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetConfigRaw
	I0308 03:57:25.244401  949014 main.go:141] libmachine: Creating machine...
	I0308 03:57:25.244411  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .Create
	I0308 03:57:25.244611  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Creating KVM machine...
	I0308 03:57:25.245911  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found existing default KVM network
	I0308 03:57:25.246625  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:25.246487  949052 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0308 03:57:25.246678  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | created network xml: 
	I0308 03:57:25.246699  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | <network>
	I0308 03:57:25.246778  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   <name>mk-kubernetes-upgrade-219954</name>
	I0308 03:57:25.246813  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   <dns enable='no'/>
	I0308 03:57:25.246821  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   
	I0308 03:57:25.246830  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 03:57:25.246839  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |     <dhcp>
	I0308 03:57:25.246847  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 03:57:25.246862  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |     </dhcp>
	I0308 03:57:25.246870  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   </ip>
	I0308 03:57:25.246879  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG |   
	I0308 03:57:25.246887  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | </network>
	I0308 03:57:25.246897  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | 
	I0308 03:57:25.252187  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | trying to create private KVM network mk-kubernetes-upgrade-219954 192.168.39.0/24...
	I0308 03:57:25.332929  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | private KVM network mk-kubernetes-upgrade-219954 192.168.39.0/24 created
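
The driver has just picked a free private subnet (192.168.39.0/24) and created the cluster's libvirt network from the XML printed above. As a rough illustration only (not the kvm2 driver's actual code), the same XML could be rendered from a small Go template; the struct fields and values below simply mirror what the log shows:

package main

import (
	"os"
	"text/template"
)

// netXML mirrors the shape of the network definition printed in the log above.
const netXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`

type network struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

func main() {
	tmpl := template.Must(template.New("net").Parse(netXML))
	// Values taken from the log above.
	n := network{
		Name:      "mk-kubernetes-upgrade-219954",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.39.2",
		DHCPEnd:   "192.168.39.253",
	}
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}

Feeding the rendered XML to libvirt (for example with virsh net-define followed by virsh net-start) creates and activates the network, which is what the preceding log line reports.
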
	I0308 03:57:25.332958  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954 ...
	I0308 03:57:25.333031  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 03:57:25.333091  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:25.332989  949052 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:57:25.333215  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 03:57:25.685705  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:25.685557  949052 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa...
	I0308 03:57:25.861078  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:25.860951  949052 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/kubernetes-upgrade-219954.rawdisk...
	I0308 03:57:25.861115  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Writing magic tar header
	I0308 03:57:25.861131  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Writing SSH key tar header
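
The "Writing magic tar header" / "Writing SSH key tar header" lines suggest the raw disk image begins with a small tar archive carrying the freshly generated SSH key, with the rest of the 20000 MB file left sparse. The sketch below is only one plausible reading of those lines, assuming a hypothetical id_rsa.pub input and machine.rawdisk output path; it is not the driver's implementation:

package main

import (
	"archive/tar"
	"os"
)

func main() {
	key, err := os.ReadFile("id_rsa.pub") // assumed to exist for this sketch
	if err != nil {
		panic(err)
	}

	f, err := os.Create("machine.rawdisk")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Write a tiny tar archive with the key at the start of the disk image.
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}

	// Extend the file to the full disk size (20000 MB in this run); the tail
	// stays sparse until the guest writes to it.
	if err := f.Truncate(20000 * 1024 * 1024); err != nil {
		panic(err)
	}
}
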
	I0308 03:57:25.861143  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:25.861063  949052 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954 ...
	I0308 03:57:25.861168  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954
	I0308 03:57:25.861296  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 03:57:25.861324  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:57:25.861346  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954 (perms=drwx------)
	I0308 03:57:25.861365  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 03:57:25.861380  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 03:57:25.861395  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 03:57:25.861409  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 03:57:25.861420  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 03:57:25.861434  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 03:57:25.861447  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home/jenkins
	I0308 03:57:25.861463  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Checking permissions on dir: /home
	I0308 03:57:25.861475  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Skipping /home - not owner
	I0308 03:57:25.861485  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
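
Before defining the domain, the driver walks up from the machine directory and makes sure every directory it owns is traversable, skipping ones it does not own (such as /home above). A minimal sketch of that kind of permission fix-up, reusing the store path from this run, might look like:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954"
	for {
		info, err := os.Stat(dir)
		if err != nil {
			fmt.Println("skipping", dir, "-", err)
		} else if info.Mode().Perm()&0o100 == 0 {
			// Add the owner-execute bit so the path can be traversed.
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				fmt.Println("skipping", dir, "- not owner")
			}
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached the filesystem root
			return
		}
		dir = parent
	}
}
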
	I0308 03:57:25.861497  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Creating domain...
	I0308 03:57:25.863254  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) define libvirt domain using xml: 
	I0308 03:57:25.863283  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) <domain type='kvm'>
	I0308 03:57:25.863301  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <name>kubernetes-upgrade-219954</name>
	I0308 03:57:25.863309  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <memory unit='MiB'>2200</memory>
	I0308 03:57:25.863318  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <vcpu>2</vcpu>
	I0308 03:57:25.863324  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <features>
	I0308 03:57:25.863336  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <acpi/>
	I0308 03:57:25.863353  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <apic/>
	I0308 03:57:25.863365  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <pae/>
	I0308 03:57:25.863383  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     
	I0308 03:57:25.863395  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   </features>
	I0308 03:57:25.863406  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <cpu mode='host-passthrough'>
	I0308 03:57:25.863418  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   
	I0308 03:57:25.863428  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   </cpu>
	I0308 03:57:25.863460  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <os>
	I0308 03:57:25.863495  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <type>hvm</type>
	I0308 03:57:25.863506  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <boot dev='cdrom'/>
	I0308 03:57:25.863516  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <boot dev='hd'/>
	I0308 03:57:25.863529  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <bootmenu enable='no'/>
	I0308 03:57:25.863540  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   </os>
	I0308 03:57:25.863551  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   <devices>
	I0308 03:57:25.863562  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <disk type='file' device='cdrom'>
	I0308 03:57:25.863586  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/boot2docker.iso'/>
	I0308 03:57:25.863604  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <target dev='hdc' bus='scsi'/>
	I0308 03:57:25.863618  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <readonly/>
	I0308 03:57:25.863626  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </disk>
	I0308 03:57:25.863639  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <disk type='file' device='disk'>
	I0308 03:57:25.863652  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 03:57:25.863670  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/kubernetes-upgrade-219954.rawdisk'/>
	I0308 03:57:25.863691  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <target dev='hda' bus='virtio'/>
	I0308 03:57:25.863701  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </disk>
	I0308 03:57:25.863709  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <interface type='network'>
	I0308 03:57:25.863722  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <source network='mk-kubernetes-upgrade-219954'/>
	I0308 03:57:25.863734  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <model type='virtio'/>
	I0308 03:57:25.863743  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </interface>
	I0308 03:57:25.863748  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <interface type='network'>
	I0308 03:57:25.863779  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <source network='default'/>
	I0308 03:57:25.863805  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <model type='virtio'/>
	I0308 03:57:25.863817  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </interface>
	I0308 03:57:25.863837  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <serial type='pty'>
	I0308 03:57:25.863847  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <target port='0'/>
	I0308 03:57:25.863857  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </serial>
	I0308 03:57:25.863866  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <console type='pty'>
	I0308 03:57:25.863874  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <target type='serial' port='0'/>
	I0308 03:57:25.863882  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </console>
	I0308 03:57:25.863893  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     <rng model='virtio'>
	I0308 03:57:25.863908  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)       <backend model='random'>/dev/random</backend>
	I0308 03:57:25.863915  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     </rng>
	I0308 03:57:25.863927  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     
	I0308 03:57:25.863938  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)     
	I0308 03:57:25.863957  949014 main.go:141] libmachine: (kubernetes-upgrade-219954)   </devices>
	I0308 03:57:25.863973  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) </domain>
	I0308 03:57:25.863987  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) 
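
With the domain XML assembled, the driver asks libvirt to define and boot the VM (the "Creating domain..." steps below). The kvm2 driver talks to libvirt through its API, but purely as an illustration the equivalent can be done by shelling out to virsh, assuming the XML above has been saved to domain.xml:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	xml, err := os.ReadFile("domain.xml") // assumed to hold the XML printed above
	if err != nil {
		panic(err)
	}
	tmp, err := os.CreateTemp("", "kubernetes-upgrade-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(xml); err != nil {
		panic(err)
	}
	tmp.Close()

	// Define the persistent domain, then boot it.
	for _, args := range [][]string{
		{"define", tmp.Name()},
		{"start", "kubernetes-upgrade-219954"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("virsh", args[0], "failed:", err)
			return
		}
	}
}
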
	I0308 03:57:25.868328  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:2c:b7:da in network default
	I0308 03:57:25.869040  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:25.869081  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Ensuring networks are active...
	I0308 03:57:25.869823  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Ensuring network default is active
	I0308 03:57:25.870164  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Ensuring network mk-kubernetes-upgrade-219954 is active
	I0308 03:57:25.870787  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Getting domain xml...
	I0308 03:57:25.871546  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Creating domain...
	I0308 03:57:27.152903  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Waiting to get IP...
	I0308 03:57:27.153813  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.154194  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.154225  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:27.154163  949052 retry.go:31] will retry after 265.366823ms: waiting for machine to come up
	I0308 03:57:27.421833  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.422250  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.422283  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:27.422209  949052 retry.go:31] will retry after 381.064725ms: waiting for machine to come up
	I0308 03:57:27.804928  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.805446  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:27.805473  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:27.805402  949052 retry.go:31] will retry after 350.85496ms: waiting for machine to come up
	I0308 03:57:28.158177  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:28.158696  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:28.158757  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:28.158671  949052 retry.go:31] will retry after 508.630607ms: waiting for machine to come up
	I0308 03:57:28.669376  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:28.669805  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:28.669828  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:28.669746  949052 retry.go:31] will retry after 495.486354ms: waiting for machine to come up
	I0308 03:57:29.166519  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:29.166940  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:29.166958  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:29.166904  949052 retry.go:31] will retry after 878.889009ms: waiting for machine to come up
	I0308 03:57:30.047723  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:30.048187  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:30.048235  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:30.048146  949052 retry.go:31] will retry after 934.935512ms: waiting for machine to come up
	I0308 03:57:30.985355  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:30.985772  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:30.985803  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:30.985761  949052 retry.go:31] will retry after 1.445838933s: waiting for machine to come up
	I0308 03:57:32.432856  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:32.433255  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:32.433289  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:32.433211  949052 retry.go:31] will retry after 1.280777459s: waiting for machine to come up
	I0308 03:57:33.715437  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:33.715895  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:33.715926  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:33.715832  949052 retry.go:31] will retry after 2.012968432s: waiting for machine to come up
	I0308 03:57:35.730969  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:35.731326  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:35.731377  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:35.731289  949052 retry.go:31] will retry after 2.223540555s: waiting for machine to come up
	I0308 03:57:37.957728  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:37.958147  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:37.958183  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:37.958084  949052 retry.go:31] will retry after 2.701272836s: waiting for machine to come up
	I0308 03:57:40.661575  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:40.662004  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:40.662027  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:40.661966  949052 retry.go:31] will retry after 3.156688939s: waiting for machine to come up
	I0308 03:57:43.822222  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:43.822652  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find current IP address of domain kubernetes-upgrade-219954 in network mk-kubernetes-upgrade-219954
	I0308 03:57:43.822680  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | I0308 03:57:43.822579  949052 retry.go:31] will retry after 4.022137875s: waiting for machine to come up
	I0308 03:57:47.847485  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:47.847939  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Found IP for machine: 192.168.39.107
	I0308 03:57:47.847964  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Reserving static IP address...
	I0308 03:57:47.847974  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has current primary IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:47.848385  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-219954", mac: "52:54:00:38:5b:5a", ip: "192.168.39.107"} in network mk-kubernetes-upgrade-219954
	I0308 03:57:47.923764  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Getting to WaitForSSH function...
	I0308 03:57:47.923799  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Reserved static IP address: 192.168.39.107
	I0308 03:57:47.923814  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Waiting for SSH to be available...
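
The "Waiting to get IP" block above is a straightforward poll loop: the driver repeatedly looks for a DHCP lease matching the domain's MAC address and, while none exists, retries after a delay that grows on each attempt (265ms, 381ms, ... up to several seconds) until 192.168.39.107 appears. A small sketch of that pattern, with a hypothetical lookupIP stand-in for the lease query, could be:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain's MAC address; here it simply reports "no lease" for a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.107", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait with a little jitter, roughly like the delays in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}
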
	I0308 03:57:47.926490  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:47.926853  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:47.926894  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:47.926995  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Using SSH client type: external
	I0308 03:57:47.927025  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa (-rw-------)
	I0308 03:57:47.927070  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 03:57:47.927083  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | About to run SSH command:
	I0308 03:57:47.927097  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | exit 0
	I0308 03:57:48.057027  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | SSH cmd err, output: <nil>: 
	I0308 03:57:48.057296  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) KVM machine creation complete!
	I0308 03:57:48.057697  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetConfigRaw
	I0308 03:57:48.058254  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:48.058467  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:48.058633  949014 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 03:57:48.058647  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetState
	I0308 03:57:48.059899  949014 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 03:57:48.059917  949014 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 03:57:48.059923  949014 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 03:57:48.059929  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.062265  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.062553  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.062583  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.062728  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.062914  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.063095  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.063263  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.063397  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:48.063655  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:48.063672  949014 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 03:57:48.176323  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 03:57:48.176344  949014 main.go:141] libmachine: Detecting the provisioner...
	I0308 03:57:48.176351  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.179024  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.179371  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.179405  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.179556  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.179751  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.179874  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.180038  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.180170  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:48.180385  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:48.180400  949014 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 03:57:48.294043  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 03:57:48.294133  949014 main.go:141] libmachine: found compatible host: buildroot
	I0308 03:57:48.294142  949014 main.go:141] libmachine: Provisioning with buildroot...
	I0308 03:57:48.294150  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 03:57:48.294420  949014 buildroot.go:166] provisioning hostname "kubernetes-upgrade-219954"
	I0308 03:57:48.294452  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 03:57:48.294649  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.297022  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.297445  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.297473  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.297637  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.297787  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.297929  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.298043  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.298276  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:48.298442  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:48.298456  949014 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-219954 && echo "kubernetes-upgrade-219954" | sudo tee /etc/hostname
	I0308 03:57:48.428768  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219954
	
	I0308 03:57:48.428801  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.431458  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.431856  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.431893  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.432042  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.432255  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.432450  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.432615  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.432803  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:48.432993  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:48.433012  949014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-219954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-219954/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-219954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 03:57:48.554565  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
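
Hostname provisioning happens over SSH: set the hostname, then idempotently rewrite the 127.0.1.1 entry in /etc/hosts (the shell snippet above). A minimal sketch of driving that same command from Go with golang.org/x/crypto/ssh, reusing the key path, user and address from this run, might look like the following; it is illustrative only, and host-key checking is disabled to match the flags shown earlier in the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.107:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The same idempotent /etc/hosts edit the provisioner runs in the log.
	cmd := `if ! grep -xq '.*\skubernetes-upgrade-219954' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-219954/g' /etc/hosts
  else
    echo '127.0.1.1 kubernetes-upgrade-219954' | sudo tee -a /etc/hosts
  fi
fi`
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("output: %q err: %v\n", out, err)
}
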
	I0308 03:57:48.554595  949014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 03:57:48.554633  949014 buildroot.go:174] setting up certificates
	I0308 03:57:48.554648  949014 provision.go:84] configureAuth start
	I0308 03:57:48.554668  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 03:57:48.554960  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 03:57:48.557503  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.557903  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.557926  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.558074  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.560361  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.560657  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.560694  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.560877  949014 provision.go:143] copyHostCerts
	I0308 03:57:48.560936  949014 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 03:57:48.560949  949014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 03:57:48.561032  949014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 03:57:48.561155  949014 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 03:57:48.561167  949014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 03:57:48.561206  949014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 03:57:48.561313  949014 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 03:57:48.561328  949014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 03:57:48.561368  949014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 03:57:48.561458  949014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-219954 san=[127.0.0.1 192.168.39.107 kubernetes-upgrade-219954 localhost minikube]
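
The server certificate generated here is signed by the local minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.107, the machine name, localhost and minikube). A compact sketch of producing such a SAN certificate with Go's crypto/x509, using a throwaway in-memory CA instead of the one on disk and with error handling mostly elided, could be:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in the real flow the CA key/cert already exist on disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-219954"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.107")},
		DNSNames:     []string{"kubernetes-upgrade-219954", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
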
	I0308 03:57:48.812270  949014 provision.go:177] copyRemoteCerts
	I0308 03:57:48.812335  949014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 03:57:48.812362  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.814759  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.815021  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.815046  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.815181  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.815396  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.815557  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.815718  949014 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 03:57:48.905081  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 03:57:48.930663  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0308 03:57:48.955424  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 03:57:48.980940  949014 provision.go:87] duration metric: took 426.272979ms to configureAuth
	I0308 03:57:48.980971  949014 buildroot.go:189] setting minikube options for container-runtime
	I0308 03:57:48.981244  949014 config.go:182] Loaded profile config "kubernetes-upgrade-219954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 03:57:48.981368  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:48.983791  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.984155  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:48.984197  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:48.984309  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:48.984529  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.984691  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:48.984836  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:48.985025  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:48.985191  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:48.985205  949014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 03:57:49.272455  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 03:57:49.272489  949014 main.go:141] libmachine: Checking connection to Docker...
	I0308 03:57:49.272502  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetURL
	I0308 03:57:49.273922  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | Using libvirt version 6000000
	I0308 03:57:49.276130  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.276523  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.276549  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.276728  949014 main.go:141] libmachine: Docker is up and running!
	I0308 03:57:49.276746  949014 main.go:141] libmachine: Reticulating splines...
	I0308 03:57:49.276756  949014 client.go:171] duration metric: took 24.033456218s to LocalClient.Create
	I0308 03:57:49.276792  949014 start.go:167] duration metric: took 24.033535566s to libmachine.API.Create "kubernetes-upgrade-219954"
	I0308 03:57:49.276807  949014 start.go:293] postStartSetup for "kubernetes-upgrade-219954" (driver="kvm2")
	I0308 03:57:49.276822  949014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 03:57:49.276843  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:49.277108  949014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 03:57:49.277134  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:49.279251  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.279527  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.279550  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.279685  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:49.279871  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:49.280034  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:49.280152  949014 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 03:57:49.369055  949014 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 03:57:49.373702  949014 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 03:57:49.373730  949014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 03:57:49.373800  949014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 03:57:49.373897  949014 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 03:57:49.373983  949014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 03:57:49.384769  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:57:49.410947  949014 start.go:296] duration metric: took 134.125389ms for postStartSetup
	I0308 03:57:49.410994  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetConfigRaw
	I0308 03:57:49.411636  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 03:57:49.414121  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.414510  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.414546  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.414803  949014 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/config.json ...
	I0308 03:57:49.414981  949014 start.go:128] duration metric: took 24.193727994s to createHost
	I0308 03:57:49.415003  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:49.416980  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.417313  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.417344  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.417439  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:49.417615  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:49.417774  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:49.417922  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:49.418060  949014 main.go:141] libmachine: Using SSH client type: native
	I0308 03:57:49.418242  949014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 03:57:49.418257  949014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 03:57:49.530090  949014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870269.505390042
	
	I0308 03:57:49.530121  949014 fix.go:216] guest clock: 1709870269.505390042
	I0308 03:57:49.530130  949014 fix.go:229] Guest: 2024-03-08 03:57:49.505390042 +0000 UTC Remote: 2024-03-08 03:57:49.41499196 +0000 UTC m=+24.338291220 (delta=90.398082ms)
	I0308 03:57:49.530155  949014 fix.go:200] guest clock delta is within tolerance: 90.398082ms
	I0308 03:57:49.530163  949014 start.go:83] releasing machines lock for "kubernetes-upgrade-219954", held for 24.309069255s
	I0308 03:57:49.530202  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:49.530495  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 03:57:49.533442  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.533805  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.533842  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.534047  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:49.534616  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:49.534830  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 03:57:49.534943  949014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 03:57:49.534986  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:49.535026  949014 ssh_runner.go:195] Run: cat /version.json
	I0308 03:57:49.535043  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 03:57:49.537701  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.538027  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.538059  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.538172  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.538304  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:49.538483  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:49.538593  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:49.538620  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:49.538621  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:49.538851  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 03:57:49.538850  949014 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 03:57:49.538998  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 03:57:49.539174  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 03:57:49.539311  949014 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 03:57:49.627709  949014 ssh_runner.go:195] Run: systemctl --version
	I0308 03:57:49.655141  949014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 03:57:49.826377  949014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 03:57:49.832940  949014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 03:57:49.833022  949014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 03:57:49.850927  949014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
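The find/mv above is how minikube sidelines any pre-existing bridge or podman CNI configs: each matching file under /etc/cni/net.d is renamed with a .mk_disabled suffix rather than deleted, so only the CNI minikube manages stays active and the originals can be restored later. The same operation written out as a standalone command (equivalent to the one logged):

  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;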
	I0308 03:57:49.850956  949014 start.go:494] detecting cgroup driver to use...
	I0308 03:57:49.851028  949014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 03:57:49.866855  949014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 03:57:49.882603  949014 docker.go:217] disabling cri-docker service (if available) ...
	I0308 03:57:49.882683  949014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 03:57:49.896248  949014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 03:57:49.909756  949014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 03:57:50.029722  949014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 03:57:50.175021  949014 docker.go:233] disabling docker service ...
	I0308 03:57:50.175082  949014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 03:57:50.190885  949014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 03:57:50.209420  949014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 03:57:50.344824  949014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 03:57:50.462801  949014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
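Because the requested runtime is CRI-O, the block above makes sure neither cri-dockerd nor dockerd can claim the node: the sockets and services are stopped, the sockets disabled, and the services masked, then docker is confirmed inactive. A condensed sketch of the same sequence (unit names taken from the log; the final echo is an addition for readability):

  for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
    sudo systemctl stop -f "$unit" || true
  done
  sudo systemctl disable cri-docker.socket docker.socket
  sudo systemctl mask cri-docker.service docker.service
  sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker inactive"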
	I0308 03:57:50.483146  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 03:57:50.503770  949014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 03:57:50.503854  949014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:57:50.515485  949014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 03:57:50.515540  949014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:57:50.527437  949014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:57:50.539284  949014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 03:57:50.551228  949014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 03:57:50.563416  949014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 03:57:50.575469  949014 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 03:57:50.575526  949014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 03:57:50.593971  949014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 03:57:50.607476  949014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:57:50.726237  949014 ssh_runner.go:195] Run: sudo systemctl restart crio
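The run of tee/sed/modprobe commands above is the CRI-O preparation pass: crictl is pointed at the CRI-O socket, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, conmon_cgroup is reset to "pod", br_netfilter is loaded because the sysctl probe for bridge-nf-call-iptables failed, IPv4 forwarding is switched on, and CRI-O is restarted. The same steps as a plain script, paths and values taken from the log:

  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
  CONF=/etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
  sudo rm -rf /etc/cni/net.mk
  sudo modprobe br_netfilter                 # provides net.bridge.bridge-nf-call-iptables
  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
  sudo systemctl daemon-reload && sudo systemctl restart crio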
	I0308 03:57:50.879692  949014 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 03:57:50.879767  949014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 03:57:50.885897  949014 start.go:562] Will wait 60s for crictl version
	I0308 03:57:50.885965  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:50.890598  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 03:57:50.929365  949014 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 03:57:50.929448  949014 ssh_runner.go:195] Run: crio --version
	I0308 03:57:50.962334  949014 ssh_runner.go:195] Run: crio --version
	I0308 03:57:50.993795  949014 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 03:57:50.995058  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 03:57:50.998291  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:50.998718  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 04:57:41 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 03:57:50.998743  949014 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 03:57:50.998948  949014 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 03:57:51.003364  949014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
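The grep/cp pair above is minikube's idempotent /etc/hosts edit: any previous host.minikube.internal entry is filtered out, the current mapping to the gateway (192.168.39.1) is appended, and the result replaces /etc/hosts in a single copy via a temp file. The same edit as a standalone snippet, taken from the logged command:

  { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts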
	I0308 03:57:51.019297  949014 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 03:57:51.019459  949014 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 03:57:51.019518  949014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:57:51.062731  949014 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 03:57:51.062794  949014 ssh_runner.go:195] Run: which lz4
	I0308 03:57:51.067455  949014 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 03:57:51.072331  949014 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 03:57:51.072363  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 03:57:53.146580  949014 crio.go:444] duration metric: took 2.0791571s to copy over tarball
	I0308 03:57:53.146686  949014 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 03:57:56.075550  949014 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.928822959s)
	I0308 03:57:56.075588  949014 crio.go:451] duration metric: took 2.928974508s to extract the tarball
	I0308 03:57:56.075599  949014 ssh_runner.go:146] rm: /preloaded.tar.lz4
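Because crictl found none of the expected v1.20.0 images, the preload tarball is copied to the guest and unpacked into /var with extended attributes preserved (the security.capability xattrs keep file capabilities inside the image layers intact), then removed. The extract-and-cleanup step on its own, as run above, with a re-check added afterwards:

  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
  sudo crictl images --output json    # re-check what the runtime sees after extraction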
	I0308 03:57:56.119611  949014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 03:57:56.171574  949014 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 03:57:56.171603  949014 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 03:57:56.171697  949014 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:57:56.171805  949014 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 03:57:56.171701  949014 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 03:57:56.171925  949014 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 03:57:56.171711  949014 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 03:57:56.171729  949014 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 03:57:56.172155  949014 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 03:57:56.171738  949014 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 03:57:56.173035  949014 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:57:56.173155  949014 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 03:57:56.173197  949014 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 03:57:56.173268  949014 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 03:57:56.173296  949014 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 03:57:56.173317  949014 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 03:57:56.173370  949014 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 03:57:56.173390  949014 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 03:57:56.330009  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 03:57:56.342276  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 03:57:56.343067  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 03:57:56.344206  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 03:57:56.358785  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 03:57:56.391748  949014 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 03:57:56.391805  949014 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 03:57:56.391855  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.401885  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 03:57:56.418386  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 03:57:56.471338  949014 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 03:57:56.471388  949014 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 03:57:56.471436  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.499561  949014 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 03:57:56.511167  949014 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 03:57:56.511213  949014 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 03:57:56.511268  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.511440  949014 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 03:57:56.511496  949014 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 03:57:56.511550  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.527761  949014 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 03:57:56.527809  949014 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 03:57:56.527826  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 03:57:56.527846  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.582468  949014 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 03:57:56.582545  949014 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 03:57:56.582603  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.593682  949014 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 03:57:56.593734  949014 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 03:57:56.593746  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 03:57:56.593777  949014 ssh_runner.go:195] Run: which crictl
	I0308 03:57:56.718010  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 03:57:56.718088  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 03:57:56.718151  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 03:57:56.718179  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 03:57:56.718214  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 03:57:56.718253  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 03:57:56.718261  949014 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 03:57:56.835212  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 03:57:56.836402  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 03:57:56.836441  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 03:57:56.836504  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 03:57:56.836555  949014 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 03:57:56.836604  949014 cache_images.go:92] duration metric: took 664.98351ms to LoadCachedImages
	W0308 03:57:56.836693  949014 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
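The image check above works per image: podman image inspect reads the ID the runtime currently has (if any); when it does not match the pinned hash, the image is marked "needs transfer", any stale copy is removed with crictl rmi, and a load from the local cache under .minikube/cache/images/ is queued. Here the cached etcd tarball is absent on the build host, so the load fails with the warning shown. A hedged sketch of that per-image decision, using one example image and the hash from the log:

  IMG=registry.k8s.io/etcd:3.4.13-0
  WANT=0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   # hash expected by the log
  HAVE=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null || true)
  if [ "$HAVE" != "$WANT" ]; then
    echo "$IMG needs transfer"
    sudo /usr/bin/crictl rmi "$IMG" || true
    # minikube would now copy .minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
    # to the guest and load it; that cache file is exactly what the warning says is missing.
  fi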
	I0308 03:57:56.836709  949014 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0308 03:57:56.836842  949014 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-219954 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 03:57:56.836923  949014 ssh_runner.go:195] Run: crio config
	I0308 03:57:56.905542  949014 cni.go:84] Creating CNI manager for ""
	I0308 03:57:56.905565  949014 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 03:57:56.905579  949014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 03:57:56.905622  949014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-219954 NodeName:kubernetes-upgrade-219954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 03:57:56.905797  949014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-219954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
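The YAML above is the full kubeadm configuration minikube renders for v1.20.0: a v1beta2 InitConfiguration/ClusterConfiguration pair plus KubeletConfiguration and KubeProxyConfiguration, with disk-based eviction effectively disabled and the conntrack timeouts left to the kernel. It is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and only promoted to kubeadm.yaml right before init. As an optional manual check that the test does not run, kubeadm init accepts --dry-run, which renders the manifests without touching the node (binary path as used in the log):

  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run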
	
	I0308 03:57:56.905886  949014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 03:57:56.919164  949014 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 03:57:56.919231  949014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 03:57:56.931923  949014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0308 03:57:56.952309  949014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 03:57:56.972252  949014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0308 03:57:56.992749  949014 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0308 03:57:56.997349  949014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 03:57:57.013895  949014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 03:57:57.138462  949014 ssh_runner.go:195] Run: sudo systemctl start kubelet
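With the 10-kubeadm.conf drop-in, the kubelet.service unit, and kubeadm.yaml.new copied over, the log reloads systemd and starts kubelet. A couple of follow-up checks, not run by the test, that confirm the drop-in actually took effect:

  sudo systemctl daemon-reload
  sudo systemctl start kubelet
  systemctl cat kubelet        # should list /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  systemctl is-active kubelet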
	I0308 03:57:57.158262  949014 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954 for IP: 192.168.39.107
	I0308 03:57:57.158293  949014 certs.go:194] generating shared ca certs ...
	I0308 03:57:57.158317  949014 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.158521  949014 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 03:57:57.158611  949014 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 03:57:57.158628  949014 certs.go:256] generating profile certs ...
	I0308 03:57:57.158709  949014 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key
	I0308 03:57:57.158731  949014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt with IP's: []
	I0308 03:57:57.290403  949014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt ...
	I0308 03:57:57.290437  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt: {Name:mk5292f2f3b6ab6fb0739c7da812e0455e8c3ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.290608  949014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key ...
	I0308 03:57:57.290621  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key: {Name:mkce9d2dda0a3b714baea2c7c1d43889c5dfcccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.290699  949014 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key.227e9756
	I0308 03:57:57.290716  949014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt.227e9756 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.107]
	I0308 03:57:57.492385  949014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt.227e9756 ...
	I0308 03:57:57.492415  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt.227e9756: {Name:mk20ea0230500ece935665ca5eea509f470da5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.492564  949014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key.227e9756 ...
	I0308 03:57:57.492578  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key.227e9756: {Name:mkac290285d50e565488af106619664c673fef9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.492646  949014 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt.227e9756 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt
	I0308 03:57:57.492720  949014 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key.227e9756 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key
	I0308 03:57:57.492778  949014 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key
	I0308 03:57:57.492793  949014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.crt with IP's: []
	I0308 03:57:57.611938  949014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.crt ...
	I0308 03:57:57.611975  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.crt: {Name:mkf7f18ba1f45ab2722dbc8109d97d8c5531c1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.612132  949014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key ...
	I0308 03:57:57.612147  949014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key: {Name:mkf677ca50ecf6b3eca9eb4ea95fc40e5613e350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 03:57:57.612320  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 03:57:57.612356  949014 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 03:57:57.612369  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 03:57:57.612391  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 03:57:57.612412  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 03:57:57.612433  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 03:57:57.612469  949014 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 03:57:57.613098  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 03:57:57.645538  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 03:57:57.678729  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 03:57:57.706839  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 03:57:57.734372  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0308 03:57:57.760752  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 03:57:57.788451  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 03:57:57.814270  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 03:57:57.840624  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 03:57:57.866593  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 03:57:57.896137  949014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 03:57:57.923722  949014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
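The cert section generates three profile key pairs (the "minikube-user" client cert, the apiserver serving cert with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.107, and the "aggregator" proxy-client cert) and copies them together with the shared CAs into /var/lib/minikube/certs on the guest. One quick way to confirm the apiserver cert carries those SANs after the copy (an added check, not in the log):

  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
    | grep -A1 'Subject Alternative Name'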
	I0308 03:57:57.942424  949014 ssh_runner.go:195] Run: openssl version
	I0308 03:57:57.948649  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 03:57:57.962004  949014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 03:57:57.977342  949014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 03:57:57.977408  949014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 03:57:57.985968  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 03:57:58.001032  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 03:57:58.017185  949014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 03:57:58.022992  949014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 03:57:58.023055  949014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 03:57:58.029537  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 03:57:58.043218  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 03:57:58.057142  949014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:57:58.062552  949014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:57:58.062613  949014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 03:57:58.069338  949014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
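Each PEM under /usr/share/ca-certificates is made trusted by linking it into /etc/ssl/certs under its OpenSSL subject-hash name, which is what the "openssl x509 -hash -noout" calls above compute (b5213941 for minikubeCA.pem, for example). The same trick for one file, paths from the log:

  PEM=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
  sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"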
	I0308 03:57:58.082871  949014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 03:57:58.089893  949014 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 03:57:58.089961  949014 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:57:58.090067  949014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 03:57:58.090125  949014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 03:57:58.131099  949014 cri.go:89] found id: ""
	I0308 03:57:58.131180  949014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 03:57:58.142924  949014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 03:57:58.154183  949014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 03:57:58.165342  949014 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 03:57:58.165361  949014 kubeadm.go:156] found existing configuration files:
	
	I0308 03:57:58.165409  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 03:57:58.177216  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 03:57:58.177264  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 03:57:58.189836  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 03:57:58.200435  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 03:57:58.200495  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 03:57:58.211763  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 03:57:58.222288  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 03:57:58.222333  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 03:57:58.234202  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 03:57:58.244602  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 03:57:58.244667  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
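The stale-config pass above checks each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and deletes any file that does not contain it; on this fresh guest none of the files exist, so every grep fails and each rm -f is a no-op. The same pass written as a loop, equivalent to the individual commands logged:

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
      sudo rm -f "/etc/kubernetes/$f"
    fi
  done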
	I0308 03:57:58.255269  949014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 03:57:58.372628  949014 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 03:57:58.372769  949014 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 03:57:58.530359  949014 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 03:57:58.530526  949014 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 03:57:58.530656  949014 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 03:57:58.719702  949014 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 03:57:58.721722  949014 out.go:204]   - Generating certificates and keys ...
	I0308 03:57:58.721847  949014 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 03:57:58.721954  949014 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 03:57:58.976340  949014 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 03:57:59.078214  949014 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 03:57:59.204992  949014 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 03:57:59.310596  949014 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 03:57:59.518040  949014 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 03:57:59.518543  949014 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	I0308 03:58:00.043590  949014 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 03:58:00.043780  949014 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	I0308 03:58:00.415428  949014 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 03:58:00.668204  949014 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 03:58:00.745734  949014 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 03:58:00.745897  949014 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 03:58:01.028355  949014 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 03:58:01.433028  949014 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 03:58:01.742231  949014 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 03:58:01.806766  949014 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 03:58:01.827037  949014 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 03:58:01.828274  949014 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 03:58:01.828364  949014 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 03:58:01.958089  949014 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 03:58:01.960240  949014 out.go:204]   - Booting up control plane ...
	I0308 03:58:01.960381  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 03:58:01.962897  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 03:58:01.972243  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 03:58:01.973421  949014 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 03:58:01.979331  949014 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 03:58:41.976061  949014 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 03:58:41.976472  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 03:58:41.976726  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 03:58:46.977585  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 03:58:46.977904  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 03:58:56.978729  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 03:58:56.979019  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 03:59:16.980997  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 03:59:16.981254  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 03:59:56.980791  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 03:59:56.981354  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 03:59:56.981396  949014 kubeadm.go:309] 
	I0308 03:59:56.981493  949014 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 03:59:56.981580  949014 kubeadm.go:309] 		timed out waiting for the condition
	I0308 03:59:56.981593  949014 kubeadm.go:309] 
	I0308 03:59:56.981682  949014 kubeadm.go:309] 	This error is likely caused by:
	I0308 03:59:56.981759  949014 kubeadm.go:309] 		- The kubelet is not running
	I0308 03:59:56.981908  949014 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 03:59:56.981918  949014 kubeadm.go:309] 
	I0308 03:59:56.982065  949014 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 03:59:56.982126  949014 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 03:59:56.982228  949014 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 03:59:56.982260  949014 kubeadm.go:309] 
	I0308 03:59:56.982502  949014 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 03:59:56.982647  949014 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 03:59:56.982661  949014 kubeadm.go:309] 
	I0308 03:59:56.982958  949014 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 03:59:56.983191  949014 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 03:59:56.983381  949014 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 03:59:56.983538  949014 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 03:59:56.983571  949014 kubeadm.go:309] 
	I0308 03:59:56.983859  949014 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 03:59:56.984045  949014 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 03:59:56.984247  949014 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 03:59:56.984420  949014 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-219954 localhost] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 03:59:56.984515  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 03:59:59.183492  949014 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.198945386s)
	I0308 03:59:59.183584  949014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:59:59.199186  949014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 03:59:59.210211  949014 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 03:59:59.210243  949014 kubeadm.go:156] found existing configuration files:
	
	I0308 03:59:59.210296  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 03:59:59.220861  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 03:59:59.220918  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 03:59:59.231950  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 03:59:59.241988  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 03:59:59.242036  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 03:59:59.252492  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 03:59:59.262435  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 03:59:59.262482  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 03:59:59.272790  949014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 03:59:59.282937  949014 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 03:59:59.282990  949014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 03:59:59.293449  949014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 03:59:59.538094  949014 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:01:55.697247  949014 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:01:55.697404  949014 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:01:55.699065  949014 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:01:55.699168  949014 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:01:55.699295  949014 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:01:55.699415  949014 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:01:55.699556  949014 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:01:55.699646  949014 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:01:55.701497  949014 out.go:204]   - Generating certificates and keys ...
	I0308 04:01:55.701588  949014 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:01:55.701685  949014 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:01:55.701798  949014 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:01:55.701884  949014 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:01:55.701983  949014 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:01:55.702079  949014 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:01:55.702137  949014 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:01:55.702214  949014 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:01:55.702312  949014 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:01:55.702428  949014 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:01:55.702483  949014 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:01:55.702552  949014 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:01:55.702616  949014 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:01:55.702690  949014 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:01:55.702780  949014 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:01:55.702859  949014 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:01:55.703009  949014 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:01:55.703119  949014 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:01:55.703181  949014 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:01:55.703278  949014 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:01:55.705568  949014 out.go:204]   - Booting up control plane ...
	I0308 04:01:55.705664  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:01:55.705747  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:01:55.705825  949014 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:01:55.705942  949014 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:01:55.706136  949014 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:01:55.706196  949014 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:01:55.706254  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:01:55.706476  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:01:55.706575  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:01:55.706841  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:01:55.706946  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:01:55.707168  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:01:55.707244  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:01:55.707412  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:01:55.707474  949014 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:01:55.707643  949014 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:01:55.707651  949014 kubeadm.go:309] 
	I0308 04:01:55.707684  949014 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:01:55.707718  949014 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:01:55.707725  949014 kubeadm.go:309] 
	I0308 04:01:55.707753  949014 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:01:55.707782  949014 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:01:55.707873  949014 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:01:55.707891  949014 kubeadm.go:309] 
	I0308 04:01:55.707982  949014 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:01:55.708012  949014 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:01:55.708049  949014 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:01:55.708058  949014 kubeadm.go:309] 
	I0308 04:01:55.708146  949014 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:01:55.708219  949014 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:01:55.708226  949014 kubeadm.go:309] 
	I0308 04:01:55.708344  949014 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:01:55.708424  949014 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:01:55.708515  949014 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:01:55.708585  949014 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:01:55.708598  949014 kubeadm.go:309] 
	I0308 04:01:55.708659  949014 kubeadm.go:393] duration metric: took 3m57.618705916s to StartCluster
	I0308 04:01:55.708722  949014 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:01:55.708784  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:01:55.758298  949014 cri.go:89] found id: ""
	I0308 04:01:55.758335  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.758352  949014 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:01:55.758358  949014 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:01:55.758417  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:01:55.798977  949014 cri.go:89] found id: ""
	I0308 04:01:55.799015  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.799028  949014 logs.go:278] No container was found matching "etcd"
	I0308 04:01:55.799036  949014 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:01:55.799109  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:01:55.834381  949014 cri.go:89] found id: ""
	I0308 04:01:55.834408  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.834421  949014 logs.go:278] No container was found matching "coredns"
	I0308 04:01:55.834428  949014 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:01:55.834492  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:01:55.872092  949014 cri.go:89] found id: ""
	I0308 04:01:55.872129  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.872138  949014 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:01:55.872144  949014 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:01:55.872199  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:01:55.911562  949014 cri.go:89] found id: ""
	I0308 04:01:55.911612  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.911625  949014 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:01:55.911632  949014 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:01:55.911700  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:01:55.952449  949014 cri.go:89] found id: ""
	I0308 04:01:55.952480  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.952494  949014 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:01:55.952502  949014 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:01:55.952562  949014 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:01:55.991444  949014 cri.go:89] found id: ""
	I0308 04:01:55.991469  949014 logs.go:276] 0 containers: []
	W0308 04:01:55.991477  949014 logs.go:278] No container was found matching "kindnet"
	I0308 04:01:55.991488  949014 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:01:55.991502  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:01:56.106763  949014 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:01:56.106790  949014 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:01:56.106804  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:01:56.207375  949014 logs.go:123] Gathering logs for container status ...
	I0308 04:01:56.207413  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:01:56.251891  949014 logs.go:123] Gathering logs for kubelet ...
	I0308 04:01:56.251924  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:01:56.307502  949014 logs.go:123] Gathering logs for dmesg ...
	I0308 04:01:56.307539  949014 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:01:56.321664  949014 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:01:56.321709  949014 out.go:239] * 
	* 
	W0308 04:01:56.321777  949014 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:01:56.321809  949014 out.go:239] * 
	* 
	W0308 04:01:56.322721  949014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:01:56.326328  949014 out.go:177] 
	W0308 04:01:56.327705  949014 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:01:56.327772  949014 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:01:56.327805  949014 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:01:56.329356  949014 out.go:177] 

                                                
                                                
** /stderr **
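The kubeadm error above already lists the checks worth running before retrying, and minikube's own suggestion adds a kubelet cgroup-driver override. A minimal shell sketch of those steps, assuming the profile name kubernetes-upgrade-219954 from this run and the flags used by the earlier start invocation (not part of the test, just the suggested follow-up strung together):

	# Inspect the kubelet inside the guest (commands taken from the kubeadm output above)
	out/minikube-linux-amd64 -p kubernetes-upgrade-219954 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-219954 ssh "sudo journalctl -xeu kubelet"
	# List control-plane containers through crictl on the cri-o socket
	out/minikube-linux-amd64 -p kubernetes-upgrade-219954 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override that minikube suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd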
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-219954
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-219954: (2.581218513s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-219954 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-219954 status --format={{.Host}}: exit status 7 (85.97444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
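minikube status encodes cluster state in its exit code, so a stopped host returns a non-zero exit even though the command itself worked; that is why the test treats exit status 7 as acceptable here. A small sketch of how a wrapper script could read the state without failing on that exit code (profile name taken from this run; tolerating the non-zero exit is the assumption being illustrated):

	# Capture only the host state; "|| true" ignores the state-encoding exit code
	state=$(out/minikube-linux-amd64 -p kubernetes-upgrade-219954 status --format={{.Host}} || true)
	echo "host state: ${state}"   # "Stopped" is expected after the stop above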
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.931324585s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-219954 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.592471ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-219954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-219954
	    minikube start -p kubernetes-upgrade-219954 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2199542 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-219954 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
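The downgrade is refused by design, and the three options printed above are the supported ways out. A sketch of option 1 (recreate the profile at the older version), with the driver/runtime flags copied from the earlier invocations in this test; the test itself takes option 3 and simply restarts the existing cluster at v1.29.0-rc.2 in the next step:

	# Option 1 from the suggestion above: recreate the profile at Kubernetes v1.20.0
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-219954
	out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio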
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-219954 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.707849764s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-08 04:04:02.850582489 +0000 UTC m=+4095.889491530
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-219954 -n kubernetes-upgrade-219954
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-219954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-219954 logs -n 25: (2.079546163s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo docker                         | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo cat                            | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo                                | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo find                           | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-678320 sudo crio                           | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-678320                                     | cilium-678320            | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC | 08 Mar 24 04:03 UTC |
	| start   | -p force-systemd-env-292856                          | force-systemd-env-292856 | jenkins | v1.32.0 | 08 Mar 24 04:03 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| ssh     | cert-options-576568 ssh                              | cert-options-576568      | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	|         | openssl x509 -text -noout -in                        |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                          |         |         |                     |                     |
	| ssh     | -p cert-options-576568 -- sudo                       | cert-options-576568      | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                          |         |         |                     |                     |
	| delete  | -p cert-options-576568                               | cert-options-576568      | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:03:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:03:26.720927  955924 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:03:26.721544  955924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:03:26.721563  955924 out.go:304] Setting ErrFile to fd 2...
	I0308 04:03:26.721570  955924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:03:26.721995  955924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:03:26.722938  955924 out.go:298] Setting JSON to false
	I0308 04:03:26.724403  955924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27933,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:03:26.724471  955924 start.go:139] virtualization: kvm guest
	I0308 04:03:26.726420  955924 out.go:177] * [force-systemd-env-292856] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:03:26.728295  955924 notify.go:220] Checking for updates...
	I0308 04:03:26.728324  955924 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:03:26.729965  955924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:03:26.731393  955924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:03:26.732762  955924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:03:26.734223  955924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:03:26.735759  955924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0308 04:03:26.737675  955924 config.go:182] Loaded profile config "cert-expiration-401581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:03:26.737825  955924 config.go:182] Loaded profile config "cert-options-576568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:03:26.737986  955924 config.go:182] Loaded profile config "kubernetes-upgrade-219954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:03:26.738129  955924 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:03:26.775685  955924 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:03:26.777054  955924 start.go:297] selected driver: kvm2
	I0308 04:03:26.777076  955924 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:03:26.777093  955924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:03:26.777905  955924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:03:26.778016  955924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:03:26.794735  955924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:03:26.794815  955924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 04:03:26.795085  955924 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 04:03:26.795160  955924 cni.go:84] Creating CNI manager for ""
	I0308 04:03:26.795174  955924 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:03:26.795181  955924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 04:03:26.795237  955924 start.go:340] cluster config:
	{Name:force-systemd-env-292856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-292856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:03:26.795334  955924 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:03:26.798125  955924 out.go:177] * Starting "force-systemd-env-292856" primary control-plane node in "force-systemd-env-292856" cluster
	I0308 04:03:24.596945  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:24.597467  953160 main.go:141] libmachine: (cert-options-576568) DBG | unable to find current IP address of domain cert-options-576568 in network mk-cert-options-576568
	I0308 04:03:24.597490  953160 main.go:141] libmachine: (cert-options-576568) DBG | I0308 04:03:24.597409  953634 retry.go:31] will retry after 3.59430362s: waiting for machine to come up
	I0308 04:03:28.193977  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:28.194473  953160 main.go:141] libmachine: (cert-options-576568) DBG | unable to find current IP address of domain cert-options-576568 in network mk-cert-options-576568
	I0308 04:03:28.194491  953160 main.go:141] libmachine: (cert-options-576568) DBG | I0308 04:03:28.194424  953634 retry.go:31] will retry after 2.93347041s: waiting for machine to come up
	I0308 04:03:26.799600  955924 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:03:26.799675  955924 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 04:03:26.799697  955924 cache.go:56] Caching tarball of preloaded images
	I0308 04:03:26.799833  955924 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:03:26.799852  955924 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 04:03:26.799986  955924 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/force-systemd-env-292856/config.json ...
	I0308 04:03:26.800018  955924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/force-systemd-env-292856/config.json: {Name:mk8a602bdf0ce0022da34b418c84bf6c1e98aff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:26.800206  955924 start.go:360] acquireMachinesLock for force-systemd-env-292856: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:03:31.130642  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:31.131083  953160 main.go:141] libmachine: (cert-options-576568) DBG | unable to find current IP address of domain cert-options-576568 in network mk-cert-options-576568
	I0308 04:03:31.131099  953160 main.go:141] libmachine: (cert-options-576568) DBG | I0308 04:03:31.131039  953634 retry.go:31] will retry after 5.04938684s: waiting for machine to come up
	I0308 04:03:37.746848  953583 start.go:364] duration metric: took 28.445734682s to acquireMachinesLock for "kubernetes-upgrade-219954"
	I0308 04:03:37.746900  953583 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:03:37.746908  953583 fix.go:54] fixHost starting: 
	I0308 04:03:37.747339  953583 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:03:37.747393  953583 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:03:37.765010  953583 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0308 04:03:37.765542  953583 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:03:37.766118  953583 main.go:141] libmachine: Using API Version  1
	I0308 04:03:37.766147  953583 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:03:37.766452  953583 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:03:37.766653  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:37.766810  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetState
	I0308 04:03:37.768625  953583 fix.go:112] recreateIfNeeded on kubernetes-upgrade-219954: state=Running err=<nil>
	W0308 04:03:37.768657  953583 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:03:37.770823  953583 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-219954" VM ...
	I0308 04:03:36.183160  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.183730  953160 main.go:141] libmachine: (cert-options-576568) Found IP for machine: 192.168.72.25
	I0308 04:03:36.183766  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has current primary IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.183771  953160 main.go:141] libmachine: (cert-options-576568) Reserving static IP address...
	I0308 04:03:36.184086  953160 main.go:141] libmachine: (cert-options-576568) DBG | unable to find host DHCP lease matching {name: "cert-options-576568", mac: "52:54:00:77:56:58", ip: "192.168.72.25"} in network mk-cert-options-576568
	I0308 04:03:36.265525  953160 main.go:141] libmachine: (cert-options-576568) DBG | Getting to WaitForSSH function...
	I0308 04:03:36.265549  953160 main.go:141] libmachine: (cert-options-576568) Reserved static IP address: 192.168.72.25
	I0308 04:03:36.265562  953160 main.go:141] libmachine: (cert-options-576568) Waiting for SSH to be available...
	I0308 04:03:36.268266  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.268850  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.268869  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.269085  953160 main.go:141] libmachine: (cert-options-576568) DBG | Using SSH client type: external
	I0308 04:03:36.269110  953160 main.go:141] libmachine: (cert-options-576568) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa (-rw-------)
	I0308 04:03:36.269160  953160 main.go:141] libmachine: (cert-options-576568) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:03:36.269174  953160 main.go:141] libmachine: (cert-options-576568) DBG | About to run SSH command:
	I0308 04:03:36.269207  953160 main.go:141] libmachine: (cert-options-576568) DBG | exit 0
	I0308 04:03:36.393641  953160 main.go:141] libmachine: (cert-options-576568) DBG | SSH cmd err, output: <nil>: 
	I0308 04:03:36.393956  953160 main.go:141] libmachine: (cert-options-576568) KVM machine creation complete!
	I0308 04:03:36.394259  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetConfigRaw
	I0308 04:03:36.394807  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:36.395048  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:36.395192  953160 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 04:03:36.395201  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetState
	I0308 04:03:36.396585  953160 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 04:03:36.396601  953160 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 04:03:36.396606  953160 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 04:03:36.396611  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:36.398843  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.399196  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.399222  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.399355  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:36.399530  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.399673  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.399791  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:36.399946  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:36.400164  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:36.400172  953160 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 04:03:36.501224  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:03:36.501241  953160 main.go:141] libmachine: Detecting the provisioner...
	I0308 04:03:36.501249  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:36.504686  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.505139  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.505164  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.505375  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:36.505602  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.505793  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.505912  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:36.506083  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:36.506266  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:36.506271  953160 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 04:03:36.610393  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 04:03:36.610487  953160 main.go:141] libmachine: found compatible host: buildroot
	I0308 04:03:36.610492  953160 main.go:141] libmachine: Provisioning with buildroot...
	I0308 04:03:36.610500  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetMachineName
	I0308 04:03:36.610764  953160 buildroot.go:166] provisioning hostname "cert-options-576568"
	I0308 04:03:36.610785  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetMachineName
	I0308 04:03:36.610944  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:36.613699  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.614021  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.614039  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.614212  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:36.614403  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.614555  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.614725  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:36.614879  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:36.615055  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:36.615062  953160 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-576568 && echo "cert-options-576568" | sudo tee /etc/hostname
	I0308 04:03:36.736220  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-576568
	
	I0308 04:03:36.736243  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:36.739591  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.740123  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.740145  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.740328  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:36.740548  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.740709  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:36.740848  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:36.740991  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:36.741172  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:36.741183  953160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-576568' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-576568/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-576568' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:03:36.852962  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:03:36.852990  953160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:03:36.853010  953160 buildroot.go:174] setting up certificates
	I0308 04:03:36.853021  953160 provision.go:84] configureAuth start
	I0308 04:03:36.853029  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetMachineName
	I0308 04:03:36.853447  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetIP
	I0308 04:03:36.856540  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.857090  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.857112  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.857334  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:36.859340  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.859741  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:36.859755  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:36.859902  953160 provision.go:143] copyHostCerts
	I0308 04:03:36.859970  953160 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:03:36.859982  953160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:03:36.860032  953160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:03:36.860105  953160 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:03:36.860108  953160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:03:36.860124  953160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:03:36.860183  953160 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:03:36.860186  953160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:03:36.860201  953160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:03:36.860241  953160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.cert-options-576568 san=[127.0.0.1 192.168.72.25 cert-options-576568 localhost minikube]
	I0308 04:03:37.044090  953160 provision.go:177] copyRemoteCerts
	I0308 04:03:37.044138  953160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:03:37.044174  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.047133  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.047532  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.047556  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.047785  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.047985  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.048117  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.048266  953160 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa Username:docker}
	I0308 04:03:37.129856  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:03:37.157874  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:03:37.185073  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:03:37.211432  953160 provision.go:87] duration metric: took 358.395622ms to configureAuth
	I0308 04:03:37.211453  953160 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:03:37.211637  953160 config.go:182] Loaded profile config "cert-options-576568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:03:37.211706  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.214892  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.215253  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.215317  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.215434  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.215645  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.215808  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.215945  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.216100  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:37.216306  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:37.216316  953160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:03:37.494506  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:03:37.494528  953160 main.go:141] libmachine: Checking connection to Docker...
	I0308 04:03:37.494538  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetURL
	I0308 04:03:37.495892  953160 main.go:141] libmachine: (cert-options-576568) DBG | Using libvirt version 6000000
	I0308 04:03:37.498121  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.498536  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.498560  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.498720  953160 main.go:141] libmachine: Docker is up and running!
	I0308 04:03:37.498731  953160 main.go:141] libmachine: Reticulating splines...
	I0308 04:03:37.498739  953160 client.go:171] duration metric: took 25.258316878s to LocalClient.Create
	I0308 04:03:37.498765  953160 start.go:167] duration metric: took 25.258385236s to libmachine.API.Create "cert-options-576568"
	I0308 04:03:37.498772  953160 start.go:293] postStartSetup for "cert-options-576568" (driver="kvm2")
	I0308 04:03:37.498781  953160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:03:37.498795  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:37.499058  953160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:03:37.499074  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.501376  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.501690  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.501706  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.501960  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.502123  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.502296  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.502403  953160 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa Username:docker}
	I0308 04:03:37.589985  953160 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:03:37.594876  953160 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:03:37.594896  953160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:03:37.594960  953160 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:03:37.595023  953160 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:03:37.595119  953160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:03:37.607508  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:03:37.634868  953160 start.go:296] duration metric: took 136.081273ms for postStartSetup
	I0308 04:03:37.634913  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetConfigRaw
	I0308 04:03:37.635508  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetIP
	I0308 04:03:37.638734  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.639149  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.639174  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.639505  953160 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/config.json ...
	I0308 04:03:37.639737  953160 start.go:128] duration metric: took 25.421081828s to createHost
	I0308 04:03:37.639758  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.642066  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.642356  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.642378  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.642514  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.642693  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.642870  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.643037  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.643211  953160 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:37.643381  953160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I0308 04:03:37.643386  953160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:03:37.746700  953160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870617.694641264
	
	I0308 04:03:37.746715  953160 fix.go:216] guest clock: 1709870617.694641264
	I0308 04:03:37.746724  953160 fix.go:229] Guest: 2024-03-08 04:03:37.694641264 +0000 UTC Remote: 2024-03-08 04:03:37.63974543 +0000 UTC m=+74.010273022 (delta=54.895834ms)
	I0308 04:03:37.746751  953160 fix.go:200] guest clock delta is within tolerance: 54.895834ms
	I0308 04:03:37.746762  953160 start.go:83] releasing machines lock for "cert-options-576568", held for 25.528284514s
	I0308 04:03:37.746793  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:37.747116  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetIP
	I0308 04:03:37.750141  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.750575  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.750596  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.750792  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:37.751357  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:37.751558  953160 main.go:141] libmachine: (cert-options-576568) Calling .DriverName
	I0308 04:03:37.751670  953160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:03:37.751709  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.751786  953160 ssh_runner.go:195] Run: cat /version.json
	I0308 04:03:37.751819  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHHostname
	I0308 04:03:37.754702  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.754722  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.755076  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.755098  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.755124  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:37.755134  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:37.755270  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.755401  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHPort
	I0308 04:03:37.755467  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.755610  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHKeyPath
	I0308 04:03:37.755693  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.755755  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetSSHUsername
	I0308 04:03:37.755831  953160 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa Username:docker}
	I0308 04:03:37.755912  953160 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/cert-options-576568/id_rsa Username:docker}
	I0308 04:03:37.865546  953160 ssh_runner.go:195] Run: systemctl --version
	I0308 04:03:37.872204  953160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:03:38.043007  953160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:03:38.051009  953160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:03:38.051071  953160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:03:38.071052  953160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:03:38.071069  953160 start.go:494] detecting cgroup driver to use...
	I0308 04:03:38.071143  953160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:03:38.092990  953160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:03:38.110018  953160 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:03:38.110071  953160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:03:38.127077  953160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:03:38.143436  953160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:03:38.270124  953160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:03:38.419543  953160 docker.go:233] disabling docker service ...
	I0308 04:03:38.419592  953160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:03:38.435878  953160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:03:38.450472  953160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:03:38.607550  953160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:03:37.771994  953583 machine.go:94] provisionDockerMachine start ...
	I0308 04:03:37.772023  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:37.772234  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:37.775186  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:37.775612  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:37.775665  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:37.775817  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:37.775998  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:37.776199  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:37.776334  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:37.776519  953583 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:37.776782  953583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 04:03:37.776801  953583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:03:37.898706  953583 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219954
	
	I0308 04:03:37.898744  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 04:03:37.899118  953583 buildroot.go:166] provisioning hostname "kubernetes-upgrade-219954"
	I0308 04:03:37.899145  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 04:03:37.899361  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:37.902509  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:37.902926  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:37.902954  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:37.903138  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:37.903380  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:37.903552  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:37.903769  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:37.903969  953583 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:37.904171  953583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 04:03:37.904187  953583 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-219954 && echo "kubernetes-upgrade-219954" | sudo tee /etc/hostname
	I0308 04:03:38.040055  953583 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-219954
	
	I0308 04:03:38.040099  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:38.043119  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.043572  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:38.043608  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.043771  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:38.044025  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:38.044314  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:38.044495  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:38.044691  953583 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:38.044902  953583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 04:03:38.044920  953583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-219954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-219954/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-219954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:03:38.167734  953583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:03:38.167770  953583 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:03:38.167808  953583 buildroot.go:174] setting up certificates
	I0308 04:03:38.167828  953583 provision.go:84] configureAuth start
	I0308 04:03:38.167847  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetMachineName
	I0308 04:03:38.168223  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 04:03:38.171109  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.171504  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:38.171530  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.171751  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:38.174541  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.174956  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:38.174983  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.175193  953583 provision.go:143] copyHostCerts
	I0308 04:03:38.175264  953583 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:03:38.175278  953583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:03:38.175348  953583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:03:38.175481  953583 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:03:38.175493  953583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:03:38.175521  953583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:03:38.175624  953583 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:03:38.175635  953583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:03:38.175668  953583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:03:38.175750  953583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-219954 san=[127.0.0.1 192.168.39.107 kubernetes-upgrade-219954 localhost minikube]
	I0308 04:03:38.410720  953583 provision.go:177] copyRemoteCerts
	I0308 04:03:38.410785  953583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:03:38.410813  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:38.414192  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.414543  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:38.414570  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.414782  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:38.415050  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:38.415263  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:38.415441  953583 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 04:03:38.514438  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:03:38.545535  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0308 04:03:38.574969  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:03:38.603653  953583 provision.go:87] duration metric: took 435.801177ms to configureAuth
	I0308 04:03:38.603689  953583 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:03:38.603931  953583 config.go:182] Loaded profile config "kubernetes-upgrade-219954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:03:38.604070  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:38.607217  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.607665  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:38.607699  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:38.607884  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:38.608094  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:38.608290  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:38.608453  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:38.608730  953583 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:38.608933  953583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 04:03:38.608955  953583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:03:38.745471  953160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:03:38.761533  953160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:03:38.785478  953160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:03:38.785538  953160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:38.799485  953160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:03:38.799562  953160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:38.812106  953160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:38.827883  953160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:38.840809  953160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:03:38.854193  953160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:03:38.865361  953160 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:03:38.865454  953160 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:03:38.881554  953160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:03:38.893009  953160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:03:39.018752  953160 ssh_runner.go:195] Run: sudo systemctl restart crio
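The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning the pause image to registry.k8s.io/pause:3.9 and switching CRI-O to the cgroupfs cgroup manager, then restarts the service. As a rough illustration only (not minikube's actual code), the same rewrite could be done from Go with regexp:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// Sketch of the sed-style rewrite seen in the log: pin the pause image and
	// force the cgroupfs cgroup manager in 02-crio.conf. Paths and option names
	// are taken from the log lines above; this is not minikube's implementation.
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("rewrote", path)
	}

The (?m) flag makes ^ and $ match per line, so each expression replaces the whole option line, just as the sed commands in the log do.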
	I0308 04:03:39.172547  953160 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:03:39.172617  953160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:03:39.178225  953160 start.go:562] Will wait 60s for crictl version
	I0308 04:03:39.178282  953160 ssh_runner.go:195] Run: which crictl
	I0308 04:03:39.182669  953160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:03:39.219914  953160 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:03:39.219985  953160 ssh_runner.go:195] Run: crio --version
	I0308 04:03:39.252549  953160 ssh_runner.go:195] Run: crio --version
	I0308 04:03:39.291818  953160 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:03:39.293188  953160 main.go:141] libmachine: (cert-options-576568) Calling .GetIP
	I0308 04:03:39.295960  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:39.296306  953160 main.go:141] libmachine: (cert-options-576568) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:58", ip: ""} in network mk-cert-options-576568: {Iface:virbr2 ExpiryTime:2024-03-08 05:03:29 +0000 UTC Type:0 Mac:52:54:00:77:56:58 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:cert-options-576568 Clientid:01:52:54:00:77:56:58}
	I0308 04:03:39.296326  953160 main.go:141] libmachine: (cert-options-576568) DBG | domain cert-options-576568 has defined IP address 192.168.72.25 and MAC address 52:54:00:77:56:58 in network mk-cert-options-576568
	I0308 04:03:39.296553  953160 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:03:39.301629  953160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:03:39.315672  953160 kubeadm.go:877] updating cluster {Name:cert-options-576568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-options-576568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:03:39.315782  953160 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:03:39.315820  953160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:03:39.359480  953160 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:03:39.359552  953160 ssh_runner.go:195] Run: which lz4
	I0308 04:03:39.364371  953160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:03:39.369329  953160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:03:39.369353  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:03:41.265699  953160 crio.go:444] duration metric: took 1.901353218s to copy over tarball
	I0308 04:03:41.265766  953160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:03:46.607157  955924 start.go:364] duration metric: took 19.806890955s to acquireMachinesLock for "force-systemd-env-292856"
	I0308 04:03:46.607231  955924 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-292856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-292856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:03:46.607381  955924 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 04:03:46.610396  955924 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0308 04:03:46.610617  955924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:03:46.610682  955924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:03:46.628380  955924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0308 04:03:46.628814  955924 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:03:46.629420  955924 main.go:141] libmachine: Using API Version  1
	I0308 04:03:46.629443  955924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:03:46.629784  955924 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:03:46.630198  955924 main.go:141] libmachine: (force-systemd-env-292856) Calling .GetMachineName
	I0308 04:03:46.630405  955924 main.go:141] libmachine: (force-systemd-env-292856) Calling .DriverName
	I0308 04:03:46.630622  955924 start.go:159] libmachine.API.Create for "force-systemd-env-292856" (driver="kvm2")
	I0308 04:03:46.630660  955924 client.go:168] LocalClient.Create starting
	I0308 04:03:46.630699  955924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 04:03:46.630742  955924 main.go:141] libmachine: Decoding PEM data...
	I0308 04:03:46.630775  955924 main.go:141] libmachine: Parsing certificate...
	I0308 04:03:46.630864  955924 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 04:03:46.630895  955924 main.go:141] libmachine: Decoding PEM data...
	I0308 04:03:46.630919  955924 main.go:141] libmachine: Parsing certificate...
	I0308 04:03:46.630953  955924 main.go:141] libmachine: Running pre-create checks...
	I0308 04:03:46.630962  955924 main.go:141] libmachine: (force-systemd-env-292856) Calling .PreCreateCheck
	I0308 04:03:46.631436  955924 main.go:141] libmachine: (force-systemd-env-292856) Calling .GetConfigRaw
	I0308 04:03:46.631887  955924 main.go:141] libmachine: Creating machine...
	I0308 04:03:46.631904  955924 main.go:141] libmachine: (force-systemd-env-292856) Calling .Create
	I0308 04:03:46.632057  955924 main.go:141] libmachine: (force-systemd-env-292856) Creating KVM machine...
	I0308 04:03:46.633414  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | found existing default KVM network
	I0308 04:03:46.635272  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:46.635088  956057 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f2:08:ed} reservation:<nil>}
	I0308 04:03:46.636752  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:46.636665  956057 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280b70}
	I0308 04:03:46.636768  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | created network xml: 
	I0308 04:03:46.636776  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | <network>
	I0308 04:03:46.636782  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   <name>mk-force-systemd-env-292856</name>
	I0308 04:03:46.636788  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   <dns enable='no'/>
	I0308 04:03:46.636806  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   
	I0308 04:03:46.636815  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0308 04:03:46.636821  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |     <dhcp>
	I0308 04:03:46.636832  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0308 04:03:46.636848  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |     </dhcp>
	I0308 04:03:46.636856  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   </ip>
	I0308 04:03:46.636868  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG |   
	I0308 04:03:46.636881  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | </network>
	I0308 04:03:46.636891  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | 
	I0308 04:03:46.642533  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | trying to create private KVM network mk-force-systemd-env-292856 192.168.50.0/24...
	I0308 04:03:44.052852  953160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.787048603s)
	I0308 04:03:44.052874  953160 crio.go:451] duration metric: took 2.787153686s to extract the tarball
	I0308 04:03:44.052880  953160 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:03:44.096095  953160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:03:44.165705  953160 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:03:44.165723  953160 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:03:44.165732  953160 kubeadm.go:928] updating node { 192.168.72.25 8555 v1.28.4 crio true true} ...
	I0308 04:03:44.165933  953160 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-576568 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-options-576568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:03:44.166037  953160 ssh_runner.go:195] Run: crio config
	I0308 04:03:44.227102  953160 cni.go:84] Creating CNI manager for ""
	I0308 04:03:44.227118  953160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:03:44.227131  953160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:03:44.227164  953160 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8555 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-576568 NodeName:cert-options-576568 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:03:44.227395  953160 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-576568"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
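The kubeadm configuration above carries the non-default settings this cert-options run asked for: API server port 8555, the extra certSANs, and the cgroupfs kubelet driver. A small sketch (using gopkg.in/yaml.v3; not part of minikube) that reads the multi-document file written to /var/tmp/minikube/kubeadm.yaml.new and prints those ClusterConfiguration fields:

	package main

	// Sketch only: walk the multi-document kubeadm YAML shown above and print
	// the ClusterConfiguration fields the cert-options test exercises.

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once every document has been read
			}
			if doc["kind"] != "ClusterConfiguration" {
				continue
			}
			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
			if api, ok := doc["apiServer"].(map[string]interface{}); ok {
				fmt.Println("extra certSANs:", api["certSANs"])
			}
		}
	}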
	
	I0308 04:03:44.227479  953160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:03:44.244968  953160 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:03:44.245042  953160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:03:44.257190  953160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:03:44.278531  953160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:03:44.297599  953160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0308 04:03:44.316758  953160 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I0308 04:03:44.321318  953160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:03:44.338823  953160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:03:44.498218  953160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:03:44.518604  953160 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568 for IP: 192.168.72.25
	I0308 04:03:44.518617  953160 certs.go:194] generating shared ca certs ...
	I0308 04:03:44.518632  953160 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.518782  953160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:03:44.518812  953160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:03:44.518817  953160 certs.go:256] generating profile certs ...
	I0308 04:03:44.518900  953160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.key
	I0308 04:03:44.518909  953160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.crt with IP's: []
	I0308 04:03:44.617067  953160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.crt ...
	I0308 04:03:44.617087  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.crt: {Name:mkf378912186a0fbce49ef6c3eb0d398e5c665df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.617268  953160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.key ...
	I0308 04:03:44.617296  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/client.key: {Name:mkb83c1ff764c6eec0e150c8424337765dd16243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.617397  953160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key.d0da2104
	I0308 04:03:44.617409  953160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt.d0da2104 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.25]
	I0308 04:03:44.822427  953160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt.d0da2104 ...
	I0308 04:03:44.822446  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt.d0da2104: {Name:mk4b775abc7bc304020dac6ec105f8034847824b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.822642  953160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key.d0da2104 ...
	I0308 04:03:44.822655  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key.d0da2104: {Name:mk4e300052fd3446d6a125048be62e1263cd82ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.822756  953160 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt.d0da2104 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt
	I0308 04:03:44.822921  953160 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key.d0da2104 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key
	I0308 04:03:44.822982  953160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.key
	I0308 04:03:44.822994  953160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.crt with IP's: []
	I0308 04:03:44.904321  953160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.crt ...
	I0308 04:03:44.904341  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.crt: {Name:mka2d47717add4b1d4e303312400e9cac2fab26d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:44.912314  953160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.key ...
	I0308 04:03:44.912344  953160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.key: {Name:mkf0c3f59994079f061afd2bfa89ef30f7cdd031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
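The profile certificates above are generated with the extra names the cert-options test passes: DNS SANs localhost and www.google.com plus IPs 127.0.0.1, 192.168.15.15, and the node address 192.168.72.25. A minimal crypto/x509 sketch of issuing a serving certificate with DNS and IP SANs (self-signed here for brevity, whereas minikube signs these against its own CA; not minikube's implementation):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Generate a throwaway key; minikube uses keys under .minikube/profiles/... instead.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN values taken from the log lines above.
			DNSNames:    []string{"localhost", "www.google.com"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.15.15"), net.ParseIP("192.168.72.25")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}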
	I0308 04:03:44.912623  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:03:44.912667  953160 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:03:44.912676  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:03:44.912704  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:03:44.912731  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:03:44.912757  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:03:44.912812  953160 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:03:44.913672  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:03:44.947945  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:03:44.975467  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:03:45.002014  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:03:45.032159  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0308 04:03:45.062007  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:03:45.097330  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:03:45.134345  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-options-576568/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:03:45.165181  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:03:45.195468  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:03:45.224804  953160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:03:45.253860  953160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:03:45.275237  953160 ssh_runner.go:195] Run: openssl version
	I0308 04:03:45.282740  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:03:45.296065  953160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:03:45.301534  953160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:03:45.301585  953160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:03:45.308100  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:03:45.323918  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:03:45.349829  953160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:45.357944  953160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:45.358025  953160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:45.367926  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:03:45.386865  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:03:45.407748  953160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:03:45.414517  953160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:03:45.414578  953160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:03:45.425395  953160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:03:45.438678  953160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:03:45.443731  953160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 04:03:45.443795  953160 kubeadm.go:391] StartCluster: {Name:cert-options-576568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-options-576568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:03:45.443880  953160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:03:45.443926  953160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:03:45.487626  953160 cri.go:89] found id: ""
	I0308 04:03:45.487715  953160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 04:03:45.499718  953160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:03:45.511804  953160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:03:45.526036  953160 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:03:45.526047  953160 kubeadm.go:156] found existing configuration files:
	
	I0308 04:03:45.526091  953160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0308 04:03:45.537201  953160 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:03:45.537264  953160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:03:45.548835  953160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0308 04:03:45.559868  953160 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:03:45.559927  953160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:03:45.571261  953160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0308 04:03:45.582810  953160 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:03:45.582869  953160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:03:45.595423  953160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0308 04:03:45.609004  953160 kubeadm.go:162] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:03:45.609050  953160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:03:45.622363  953160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:03:45.886942  953160 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:03:46.324457  953583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:03:46.324505  953583 machine.go:97] duration metric: took 8.552485008s to provisionDockerMachine
	I0308 04:03:46.324522  953583 start.go:293] postStartSetup for "kubernetes-upgrade-219954" (driver="kvm2")
	I0308 04:03:46.324540  953583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:03:46.324587  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:46.325000  953583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:03:46.325041  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:46.328541  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.329186  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:46.329221  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.329465  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:46.329689  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:46.329934  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:46.330128  953583 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 04:03:46.426025  953583 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:03:46.433290  953583 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:03:46.433328  953583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:03:46.433421  953583 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:03:46.433518  953583 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:03:46.433635  953583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:03:46.448321  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:03:46.480109  953583 start.go:296] duration metric: took 155.567914ms for postStartSetup
	I0308 04:03:46.480161  953583 fix.go:56] duration metric: took 8.733253612s for fixHost
	I0308 04:03:46.480186  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:46.483138  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.483568  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:46.483603  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.483795  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:46.484081  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:46.484251  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:46.484449  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:46.484678  953583 main.go:141] libmachine: Using SSH client type: native
	I0308 04:03:46.484926  953583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0308 04:03:46.484938  953583 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:03:46.606952  953583 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870626.601668754
	
	I0308 04:03:46.606979  953583 fix.go:216] guest clock: 1709870626.601668754
	I0308 04:03:46.606991  953583 fix.go:229] Guest: 2024-03-08 04:03:46.601668754 +0000 UTC Remote: 2024-03-08 04:03:46.480166006 +0000 UTC m=+37.334214908 (delta=121.502748ms)
	I0308 04:03:46.607047  953583 fix.go:200] guest clock delta is within tolerance: 121.502748ms
	I0308 04:03:46.607055  953583 start.go:83] releasing machines lock for "kubernetes-upgrade-219954", held for 8.86017353s
	I0308 04:03:46.607085  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:46.607371  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 04:03:46.610361  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.610800  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:46.610827  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.611006  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:46.611672  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:46.611877  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .DriverName
	I0308 04:03:46.612011  953583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:03:46.612074  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:46.612357  953583 ssh_runner.go:195] Run: cat /version.json
	I0308 04:03:46.612381  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHHostname
	I0308 04:03:46.615063  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.615407  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.615440  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:46.615468  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.615597  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:46.615760  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:46.615834  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:46.615855  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:46.615946  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:46.616121  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHPort
	I0308 04:03:46.616133  953583 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 04:03:46.616307  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHKeyPath
	I0308 04:03:46.616472  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetSSHUsername
	I0308 04:03:46.616650  953583 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/kubernetes-upgrade-219954/id_rsa Username:docker}
	I0308 04:03:46.708409  953583 ssh_runner.go:195] Run: systemctl --version
	I0308 04:03:46.735100  953583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:03:46.900735  953583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:03:46.909145  953583 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:03:46.909236  953583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:03:46.926224  953583 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 04:03:46.926259  953583 start.go:494] detecting cgroup driver to use...
	I0308 04:03:46.926339  953583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:03:46.950566  953583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:03:46.971907  953583 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:03:46.971977  953583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:03:46.989485  953583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:03:47.005782  953583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:03:47.221165  953583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:03:47.544421  953583 docker.go:233] disabling docker service ...
	I0308 04:03:47.544515  953583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:03:47.625044  953583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:03:47.659180  953583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:03:48.000238  953583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:03:48.296466  953583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:03:48.324342  953583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:03:48.509489  953583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:03:48.509596  953583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:48.588044  953583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:03:48.588127  953583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:48.627134  953583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:48.709333  953583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:03:48.787155  953583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:03:48.827131  953583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:03:48.858658  953583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:03:48.880685  953583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:03:49.177248  953583 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:03:50.256006  953583 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.078708186s)
	I0308 04:03:50.256047  953583 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:03:50.256111  953583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:03:50.264482  953583 start.go:562] Will wait 60s for crictl version
	I0308 04:03:50.264564  953583 ssh_runner.go:195] Run: which crictl
	I0308 04:03:50.270562  953583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:03:50.328711  953583 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:03:50.328846  953583 ssh_runner.go:195] Run: crio --version
	I0308 04:03:50.371182  953583 ssh_runner.go:195] Run: crio --version
	I0308 04:03:50.414529  953583 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:03:46.727485  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | private KVM network mk-force-systemd-env-292856 192.168.50.0/24 created
	I0308 04:03:46.727520  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856 ...
	I0308 04:03:46.727549  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:46.727468  956057 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:03:46.727567  955924 main.go:141] libmachine: (force-systemd-env-292856) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 04:03:46.727679  955924 main.go:141] libmachine: (force-systemd-env-292856) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 04:03:46.973473  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:46.973355  956057 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856/id_rsa...
	I0308 04:03:47.109683  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:47.109539  956057 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856/force-systemd-env-292856.rawdisk...
	I0308 04:03:47.109715  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Writing magic tar header
	I0308 04:03:47.109734  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Writing SSH key tar header
	I0308 04:03:47.109748  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:47.109657  956057 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856 ...
	I0308 04:03:47.109766  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856
	I0308 04:03:47.109785  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 04:03:47.109818  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856 (perms=drwx------)
	I0308 04:03:47.109835  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:03:47.109850  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 04:03:47.109868  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 04:03:47.109878  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 04:03:47.109884  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home/jenkins
	I0308 04:03:47.109894  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 04:03:47.109902  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Checking permissions on dir: /home
	I0308 04:03:47.109917  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 04:03:47.109929  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | Skipping /home - not owner
	I0308 04:03:47.109944  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 04:03:47.109957  955924 main.go:141] libmachine: (force-systemd-env-292856) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 04:03:47.109970  955924 main.go:141] libmachine: (force-systemd-env-292856) Creating domain...
	I0308 04:03:47.111118  955924 main.go:141] libmachine: (force-systemd-env-292856) define libvirt domain using xml: 
	I0308 04:03:47.111145  955924 main.go:141] libmachine: (force-systemd-env-292856) <domain type='kvm'>
	I0308 04:03:47.111186  955924 main.go:141] libmachine: (force-systemd-env-292856)   <name>force-systemd-env-292856</name>
	I0308 04:03:47.111239  955924 main.go:141] libmachine: (force-systemd-env-292856)   <memory unit='MiB'>2048</memory>
	I0308 04:03:47.111256  955924 main.go:141] libmachine: (force-systemd-env-292856)   <vcpu>2</vcpu>
	I0308 04:03:47.111265  955924 main.go:141] libmachine: (force-systemd-env-292856)   <features>
	I0308 04:03:47.111278  955924 main.go:141] libmachine: (force-systemd-env-292856)     <acpi/>
	I0308 04:03:47.111289  955924 main.go:141] libmachine: (force-systemd-env-292856)     <apic/>
	I0308 04:03:47.111319  955924 main.go:141] libmachine: (force-systemd-env-292856)     <pae/>
	I0308 04:03:47.111335  955924 main.go:141] libmachine: (force-systemd-env-292856)     
	I0308 04:03:47.111346  955924 main.go:141] libmachine: (force-systemd-env-292856)   </features>
	I0308 04:03:47.111353  955924 main.go:141] libmachine: (force-systemd-env-292856)   <cpu mode='host-passthrough'>
	I0308 04:03:47.111361  955924 main.go:141] libmachine: (force-systemd-env-292856)   
	I0308 04:03:47.111368  955924 main.go:141] libmachine: (force-systemd-env-292856)   </cpu>
	I0308 04:03:47.111375  955924 main.go:141] libmachine: (force-systemd-env-292856)   <os>
	I0308 04:03:47.111382  955924 main.go:141] libmachine: (force-systemd-env-292856)     <type>hvm</type>
	I0308 04:03:47.111391  955924 main.go:141] libmachine: (force-systemd-env-292856)     <boot dev='cdrom'/>
	I0308 04:03:47.111398  955924 main.go:141] libmachine: (force-systemd-env-292856)     <boot dev='hd'/>
	I0308 04:03:47.111406  955924 main.go:141] libmachine: (force-systemd-env-292856)     <bootmenu enable='no'/>
	I0308 04:03:47.111417  955924 main.go:141] libmachine: (force-systemd-env-292856)   </os>
	I0308 04:03:47.111447  955924 main.go:141] libmachine: (force-systemd-env-292856)   <devices>
	I0308 04:03:47.111469  955924 main.go:141] libmachine: (force-systemd-env-292856)     <disk type='file' device='cdrom'>
	I0308 04:03:47.111484  955924 main.go:141] libmachine: (force-systemd-env-292856)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856/boot2docker.iso'/>
	I0308 04:03:47.111490  955924 main.go:141] libmachine: (force-systemd-env-292856)       <target dev='hdc' bus='scsi'/>
	I0308 04:03:47.111498  955924 main.go:141] libmachine: (force-systemd-env-292856)       <readonly/>
	I0308 04:03:47.111506  955924 main.go:141] libmachine: (force-systemd-env-292856)     </disk>
	I0308 04:03:47.111515  955924 main.go:141] libmachine: (force-systemd-env-292856)     <disk type='file' device='disk'>
	I0308 04:03:47.111525  955924 main.go:141] libmachine: (force-systemd-env-292856)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 04:03:47.111539  955924 main.go:141] libmachine: (force-systemd-env-292856)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/force-systemd-env-292856/force-systemd-env-292856.rawdisk'/>
	I0308 04:03:47.111547  955924 main.go:141] libmachine: (force-systemd-env-292856)       <target dev='hda' bus='virtio'/>
	I0308 04:03:47.111557  955924 main.go:141] libmachine: (force-systemd-env-292856)     </disk>
	I0308 04:03:47.111564  955924 main.go:141] libmachine: (force-systemd-env-292856)     <interface type='network'>
	I0308 04:03:47.111574  955924 main.go:141] libmachine: (force-systemd-env-292856)       <source network='mk-force-systemd-env-292856'/>
	I0308 04:03:47.111594  955924 main.go:141] libmachine: (force-systemd-env-292856)       <model type='virtio'/>
	I0308 04:03:47.111607  955924 main.go:141] libmachine: (force-systemd-env-292856)     </interface>
	I0308 04:03:47.111617  955924 main.go:141] libmachine: (force-systemd-env-292856)     <interface type='network'>
	I0308 04:03:47.111629  955924 main.go:141] libmachine: (force-systemd-env-292856)       <source network='default'/>
	I0308 04:03:47.111639  955924 main.go:141] libmachine: (force-systemd-env-292856)       <model type='virtio'/>
	I0308 04:03:47.111651  955924 main.go:141] libmachine: (force-systemd-env-292856)     </interface>
	I0308 04:03:47.111662  955924 main.go:141] libmachine: (force-systemd-env-292856)     <serial type='pty'>
	I0308 04:03:47.111670  955924 main.go:141] libmachine: (force-systemd-env-292856)       <target port='0'/>
	I0308 04:03:47.111677  955924 main.go:141] libmachine: (force-systemd-env-292856)     </serial>
	I0308 04:03:47.111686  955924 main.go:141] libmachine: (force-systemd-env-292856)     <console type='pty'>
	I0308 04:03:47.111704  955924 main.go:141] libmachine: (force-systemd-env-292856)       <target type='serial' port='0'/>
	I0308 04:03:47.111717  955924 main.go:141] libmachine: (force-systemd-env-292856)     </console>
	I0308 04:03:47.111725  955924 main.go:141] libmachine: (force-systemd-env-292856)     <rng model='virtio'>
	I0308 04:03:47.111738  955924 main.go:141] libmachine: (force-systemd-env-292856)       <backend model='random'>/dev/random</backend>
	I0308 04:03:47.111750  955924 main.go:141] libmachine: (force-systemd-env-292856)     </rng>
	I0308 04:03:47.111780  955924 main.go:141] libmachine: (force-systemd-env-292856)     
	I0308 04:03:47.111801  955924 main.go:141] libmachine: (force-systemd-env-292856)     
	I0308 04:03:47.111812  955924 main.go:141] libmachine: (force-systemd-env-292856)   </devices>
	I0308 04:03:47.111823  955924 main.go:141] libmachine: (force-systemd-env-292856) </domain>
	I0308 04:03:47.111839  955924 main.go:141] libmachine: (force-systemd-env-292856) 
	I0308 04:03:47.116135  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:3c:7d:80 in network default
	I0308 04:03:47.116892  955924 main.go:141] libmachine: (force-systemd-env-292856) Ensuring networks are active...
	I0308 04:03:47.116913  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:47.117664  955924 main.go:141] libmachine: (force-systemd-env-292856) Ensuring network default is active
	I0308 04:03:47.118034  955924 main.go:141] libmachine: (force-systemd-env-292856) Ensuring network mk-force-systemd-env-292856 is active
	I0308 04:03:47.118497  955924 main.go:141] libmachine: (force-systemd-env-292856) Getting domain xml...
	I0308 04:03:47.119243  955924 main.go:141] libmachine: (force-systemd-env-292856) Creating domain...
	I0308 04:03:48.500225  955924 main.go:141] libmachine: (force-systemd-env-292856) Waiting to get IP...
	I0308 04:03:48.501331  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:48.501912  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:48.501941  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:48.501864  956057 retry.go:31] will retry after 260.280022ms: waiting for machine to come up
	I0308 04:03:48.764293  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:48.764933  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:48.764977  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:48.764880  956057 retry.go:31] will retry after 252.539285ms: waiting for machine to come up
	I0308 04:03:49.019658  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:49.020155  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:49.020203  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:49.020112  956057 retry.go:31] will retry after 353.527625ms: waiting for machine to come up
	I0308 04:03:49.375833  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:49.376497  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:49.376534  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:49.376412  956057 retry.go:31] will retry after 373.968938ms: waiting for machine to come up
	I0308 04:03:49.752000  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:49.752570  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:49.752597  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:49.752524  956057 retry.go:31] will retry after 661.838659ms: waiting for machine to come up
	I0308 04:03:50.416118  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:50.417066  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:50.417093  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:50.417009  956057 retry.go:31] will retry after 849.952503ms: waiting for machine to come up
	I0308 04:03:51.268979  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | domain force-systemd-env-292856 has defined MAC address 52:54:00:b3:f2:aa in network mk-force-systemd-env-292856
	I0308 04:03:51.269944  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | unable to find current IP address of domain force-systemd-env-292856 in network mk-force-systemd-env-292856
	I0308 04:03:51.270113  955924 main.go:141] libmachine: (force-systemd-env-292856) DBG | I0308 04:03:51.270046  956057 retry.go:31] will retry after 738.854893ms: waiting for machine to come up
	I0308 04:03:50.415837  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) Calling .GetIP
	I0308 04:03:50.419308  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:50.419766  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:5b:5a", ip: ""} in network mk-kubernetes-upgrade-219954: {Iface:virbr1 ExpiryTime:2024-03-08 05:02:44 +0000 UTC Type:0 Mac:52:54:00:38:5b:5a Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:kubernetes-upgrade-219954 Clientid:01:52:54:00:38:5b:5a}
	I0308 04:03:50.419797  953583 main.go:141] libmachine: (kubernetes-upgrade-219954) DBG | domain kubernetes-upgrade-219954 has defined IP address 192.168.39.107 and MAC address 52:54:00:38:5b:5a in network mk-kubernetes-upgrade-219954
	I0308 04:03:50.420101  953583 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:03:50.427657  953583 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:03:50.427844  953583 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:03:50.427922  953583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:03:50.502824  953583 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:03:50.502857  953583 crio.go:415] Images already preloaded, skipping extraction
	I0308 04:03:50.502924  953583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:03:50.554219  953583 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:03:50.554252  953583 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:03:50.554263  953583 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:03:50.554445  953583 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-219954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:03:50.554546  953583 ssh_runner.go:195] Run: crio config
	I0308 04:03:50.647048  953583 cni.go:84] Creating CNI manager for ""
	I0308 04:03:50.647155  953583 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:03:50.647183  953583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:03:50.647224  953583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-219954 NodeName:kubernetes-upgrade-219954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:03:50.647423  953583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-219954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:03:50.647515  953583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:03:50.734851  953583 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:03:50.734953  953583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:03:50.760860  953583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0308 04:03:50.920990  953583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:03:51.122275  953583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0308 04:03:51.322840  953583 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0308 04:03:51.350098  953583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:03:51.613365  953583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:03:51.640357  953583 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954 for IP: 192.168.39.107
	I0308 04:03:51.640395  953583 certs.go:194] generating shared ca certs ...
	I0308 04:03:51.640423  953583 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:03:51.640618  953583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:03:51.640684  953583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:03:51.640701  953583 certs.go:256] generating profile certs ...
	I0308 04:03:51.640849  953583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key
	I0308 04:03:51.640918  953583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key.227e9756
	I0308 04:03:51.640967  953583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key
	I0308 04:03:51.641104  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:03:51.641139  953583 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:03:51.641149  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:03:51.641176  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:03:51.641202  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:03:51.641227  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:03:51.641287  953583 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:03:51.642141  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:03:51.730167  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:03:51.764448  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:03:51.822392  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:03:51.924933  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0308 04:03:51.958468  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:03:51.993609  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:03:52.030852  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:03:52.071008  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:03:52.111565  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:03:52.147007  953583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:03:52.179639  953583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:03:52.203052  953583 ssh_runner.go:195] Run: openssl version
	I0308 04:03:52.210624  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:03:52.223733  953583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:03:52.231621  953583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:03:52.231706  953583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:03:52.240686  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:03:52.256517  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:03:52.274408  953583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:52.281529  953583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:52.281607  953583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:03:52.291407  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:03:52.305904  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:03:52.323653  953583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:03:52.330015  953583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:03:52.330114  953583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:03:52.337169  953583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:03:52.349560  953583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:03:52.355654  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:03:52.362853  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:03:52.370183  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:03:52.377413  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:03:52.384687  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:03:52.393519  953583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:03:52.402285  953583 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-219954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-219954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:03:52.402423  953583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:03:52.402492  953583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:03:52.455300  953583 cri.go:89] found id: "72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde"
	I0308 04:03:52.455335  953583 cri.go:89] found id: "769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94"
	I0308 04:03:52.455341  953583 cri.go:89] found id: "b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19"
	I0308 04:03:52.455362  953583 cri.go:89] found id: "ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e"
	I0308 04:03:52.455376  953583 cri.go:89] found id: "1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e"
	I0308 04:03:52.455380  953583 cri.go:89] found id: "5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b"
	I0308 04:03:52.455384  953583 cri.go:89] found id: "50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e"
	I0308 04:03:52.455388  953583 cri.go:89] found id: "3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829"
	I0308 04:03:52.455392  953583 cri.go:89] found id: ""
	I0308 04:03:52.455450  953583 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.763286258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870643763253966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7109a59-dfb5-4f43-b497-d1978d34dbaa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.764359054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f20b18e-d184-4e16-8fe5-901833fab262 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.764438912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f20b18e-d184-4e16-8fe5-901833fab262 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.764776322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf946dd75a7404dd39c19cd1342080953d1beb621df8d1276f292f35a3572359,PodSandboxId:f3adec48b571d4e65d492a0f59059c34c7ec1304a017e2abe2bc2596ffc89195,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640068694263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad80bb96bbef4663b4e906a01cee5a89e619b57861b0be59a0eebf1b3244ef60,PodSandboxId:619951a4d0bd46e78afc39055f213391a7c00b3598050df2268f0f191dd28200,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640097058250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b8f5a8e3bc029170a3a19d571e546c60059af197cc77bb04168053260d72bc,PodSandboxId:602302ccb665977c37b5182e9f7bc14e8302d8dcd2265f79647489ca24723058,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER
_RUNNING,CreatedAt:1709870640018034371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9808cec5572b811b7ad8f91c434a6a5033deea5246c7f6eb5247fdc775610195,PodSandboxId:762f9de873eb810e07757401bdecfc624514c7fe974df20957b62a98ceb2e37a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17098
70640071524397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45bc9d8b0ebe40ba9a82b034fa1d0c0d2617443dd556d3c0a907caea23057e8a,PodSandboxId:18617a6381efc3edaea5de7042d0694878da4e1670edf2ab06984dabdb9ae603,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709870635493582892,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a9df26841c18311d5de4dc5dc5916beb36171f1981a8338a824609fcc48da1,PodSandboxId:40300c2e19844bc4a1bd7e8d23a131d5c4599cf9233fb8fe401a866a484ce97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709870635459282729,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70877bc22bbc0ad63c82a026b1a213a28168d764daa6e7ac8c8d810b15dfa875,PodSandboxId:f903dca5ca1eb5ee357af843c844128ab1dfb238ba8b7df1e08087452f68fbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709870635460343607,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8e0a7e9538b2702195d94254f52b4a2d82634baa16acc8ad244e45b712bb2c,PodSandboxId:4a851596e2605ceaa04bf3e44901bb290200d5902e23668e250d359ea3cf734a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709870635437904344,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19,PodSandboxId:fdfb8a2946a3afaa983f0749f33abd01420bbfc1267cb633b04d2f94977dd061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709870628597942822,Labels:map[string]stri
ng{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e,PodSandboxId:1d48e6c8c11d2f68cd0b5b891140afb1b6636828566009ee6734a1c8500c7bbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709870628289217707,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde,PodSandboxId:dc67f676a943ff575e7bf793ebeb1b378178f6cbeb02b685e89361463ef7d805,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629139678612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94,PodSandboxId:7d7c31251ffbbe1c6cfde063db9782edf6a41e00105471148a6d80ffd23a9b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7b
d410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629106339981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e,PodSandboxId:0fa5edd97a6588a661b95a6465a29c0068a351695024004d140707f0a7264498,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709870628221118575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b,PodSandboxId:cbbc9ff63b982c47f8b141cc6d2e257f1bf29ecad47314ba1877056976d7ddad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709870628106514838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e,PodSandboxId:17801f336354c0e8d23c6702e5cb1dc5ce780d031d659eca02bbc53790cc827c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709870628085338086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829,PodSandboxId:1ec3e4814f54fe4cae104bec8a8185001bf65e0d9980304a54c13a88858097c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt
:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709870627911610843,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f20b18e-d184-4e16-8fe5-901833fab262 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.819177598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa0cb35e-1732-4fae-b041-66044d9786fd name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.819279049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa0cb35e-1732-4fae-b041-66044d9786fd name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.821355499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1dcc96d-bb5c-40a9-9e15-e1eea719eef9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.822029363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870643821993432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1dcc96d-bb5c-40a9-9e15-e1eea719eef9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.823186928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84cba8af-6922-4b5f-a833-31f8c2eef455 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.823262354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84cba8af-6922-4b5f-a833-31f8c2eef455 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.823677660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf946dd75a7404dd39c19cd1342080953d1beb621df8d1276f292f35a3572359,PodSandboxId:f3adec48b571d4e65d492a0f59059c34c7ec1304a017e2abe2bc2596ffc89195,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640068694263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad80bb96bbef4663b4e906a01cee5a89e619b57861b0be59a0eebf1b3244ef60,PodSandboxId:619951a4d0bd46e78afc39055f213391a7c00b3598050df2268f0f191dd28200,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640097058250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b8f5a8e3bc029170a3a19d571e546c60059af197cc77bb04168053260d72bc,PodSandboxId:602302ccb665977c37b5182e9f7bc14e8302d8dcd2265f79647489ca24723058,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER
_RUNNING,CreatedAt:1709870640018034371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9808cec5572b811b7ad8f91c434a6a5033deea5246c7f6eb5247fdc775610195,PodSandboxId:762f9de873eb810e07757401bdecfc624514c7fe974df20957b62a98ceb2e37a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17098
70640071524397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45bc9d8b0ebe40ba9a82b034fa1d0c0d2617443dd556d3c0a907caea23057e8a,PodSandboxId:18617a6381efc3edaea5de7042d0694878da4e1670edf2ab06984dabdb9ae603,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709870635493582892,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a9df26841c18311d5de4dc5dc5916beb36171f1981a8338a824609fcc48da1,PodSandboxId:40300c2e19844bc4a1bd7e8d23a131d5c4599cf9233fb8fe401a866a484ce97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709870635459282729,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70877bc22bbc0ad63c82a026b1a213a28168d764daa6e7ac8c8d810b15dfa875,PodSandboxId:f903dca5ca1eb5ee357af843c844128ab1dfb238ba8b7df1e08087452f68fbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709870635460343607,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8e0a7e9538b2702195d94254f52b4a2d82634baa16acc8ad244e45b712bb2c,PodSandboxId:4a851596e2605ceaa04bf3e44901bb290200d5902e23668e250d359ea3cf734a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709870635437904344,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19,PodSandboxId:fdfb8a2946a3afaa983f0749f33abd01420bbfc1267cb633b04d2f94977dd061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709870628597942822,Labels:map[string]stri
ng{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e,PodSandboxId:1d48e6c8c11d2f68cd0b5b891140afb1b6636828566009ee6734a1c8500c7bbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709870628289217707,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde,PodSandboxId:dc67f676a943ff575e7bf793ebeb1b378178f6cbeb02b685e89361463ef7d805,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629139678612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94,PodSandboxId:7d7c31251ffbbe1c6cfde063db9782edf6a41e00105471148a6d80ffd23a9b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7b
d410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629106339981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e,PodSandboxId:0fa5edd97a6588a661b95a6465a29c0068a351695024004d140707f0a7264498,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709870628221118575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b,PodSandboxId:cbbc9ff63b982c47f8b141cc6d2e257f1bf29ecad47314ba1877056976d7ddad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709870628106514838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e,PodSandboxId:17801f336354c0e8d23c6702e5cb1dc5ce780d031d659eca02bbc53790cc827c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709870628085338086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829,PodSandboxId:1ec3e4814f54fe4cae104bec8a8185001bf65e0d9980304a54c13a88858097c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt
:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709870627911610843,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84cba8af-6922-4b5f-a833-31f8c2eef455 name=/runtime.v1.RuntimeService/ListContainers
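
	Each Request/Response pair in this CRI-O trace is the otel interceptor logging one gRPC call on /runtime.v1.RuntimeService or /runtime.v1.ImageService; the empty ContainerFilter is why the server notes "No filters were applied, returning full container list". A minimal sketch (an assumption for illustration, not part of the test harness) of issuing the same ListContainers call against CRI-O's default unix socket:

	// Sketch: call /runtime.v1.RuntimeService/ListContainers directly, as the
	// client side of the trace above does on every polling cycle.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Default CRI-O endpoint; kubelet and crictl talk to the same socket.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// An empty ListContainersRequest matches every container, mirroring the
		// unfiltered requests seen in the log.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
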
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.881411642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5cde6af-3229-4c76-ad88-5944fce59faa name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.881558197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5cde6af-3229-4c76-ad88-5944fce59faa name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.883103623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c77e3e92-1a15-4a55-857a-79c240d628cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.883667804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870643883632039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c77e3e92-1a15-4a55-857a-79c240d628cf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.884723191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10267017-970d-4b10-ab4f-0c20b2b050e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.884806566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10267017-970d-4b10-ab4f-0c20b2b050e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.885412698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf946dd75a7404dd39c19cd1342080953d1beb621df8d1276f292f35a3572359,PodSandboxId:f3adec48b571d4e65d492a0f59059c34c7ec1304a017e2abe2bc2596ffc89195,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640068694263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad80bb96bbef4663b4e906a01cee5a89e619b57861b0be59a0eebf1b3244ef60,PodSandboxId:619951a4d0bd46e78afc39055f213391a7c00b3598050df2268f0f191dd28200,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640097058250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b8f5a8e3bc029170a3a19d571e546c60059af197cc77bb04168053260d72bc,PodSandboxId:602302ccb665977c37b5182e9f7bc14e8302d8dcd2265f79647489ca24723058,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER
_RUNNING,CreatedAt:1709870640018034371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9808cec5572b811b7ad8f91c434a6a5033deea5246c7f6eb5247fdc775610195,PodSandboxId:762f9de873eb810e07757401bdecfc624514c7fe974df20957b62a98ceb2e37a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17098
70640071524397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45bc9d8b0ebe40ba9a82b034fa1d0c0d2617443dd556d3c0a907caea23057e8a,PodSandboxId:18617a6381efc3edaea5de7042d0694878da4e1670edf2ab06984dabdb9ae603,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709870635493582892,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a9df26841c18311d5de4dc5dc5916beb36171f1981a8338a824609fcc48da1,PodSandboxId:40300c2e19844bc4a1bd7e8d23a131d5c4599cf9233fb8fe401a866a484ce97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709870635459282729,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70877bc22bbc0ad63c82a026b1a213a28168d764daa6e7ac8c8d810b15dfa875,PodSandboxId:f903dca5ca1eb5ee357af843c844128ab1dfb238ba8b7df1e08087452f68fbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709870635460343607,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8e0a7e9538b2702195d94254f52b4a2d82634baa16acc8ad244e45b712bb2c,PodSandboxId:4a851596e2605ceaa04bf3e44901bb290200d5902e23668e250d359ea3cf734a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709870635437904344,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19,PodSandboxId:fdfb8a2946a3afaa983f0749f33abd01420bbfc1267cb633b04d2f94977dd061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709870628597942822,Labels:map[string]stri
ng{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e,PodSandboxId:1d48e6c8c11d2f68cd0b5b891140afb1b6636828566009ee6734a1c8500c7bbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709870628289217707,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde,PodSandboxId:dc67f676a943ff575e7bf793ebeb1b378178f6cbeb02b685e89361463ef7d805,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629139678612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94,PodSandboxId:7d7c31251ffbbe1c6cfde063db9782edf6a41e00105471148a6d80ffd23a9b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7b
d410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629106339981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e,PodSandboxId:0fa5edd97a6588a661b95a6465a29c0068a351695024004d140707f0a7264498,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709870628221118575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b,PodSandboxId:cbbc9ff63b982c47f8b141cc6d2e257f1bf29ecad47314ba1877056976d7ddad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709870628106514838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e,PodSandboxId:17801f336354c0e8d23c6702e5cb1dc5ce780d031d659eca02bbc53790cc827c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709870628085338086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829,PodSandboxId:1ec3e4814f54fe4cae104bec8a8185001bf65e0d9980304a54c13a88858097c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt
:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709870627911610843,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10267017-970d-4b10-ab4f-0c20b2b050e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.934749902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa366237-59f0-4694-a0d6-f881cb7fe049 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.934923830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa366237-59f0-4694-a0d6-f881cb7fe049 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.936041327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3257a170-f741-459f-9c64-b623c72ce6fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.936391808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870643936369452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3257a170-f741-459f-9c64-b623c72ce6fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.937098321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a32c50a9-00c2-4e51-9107-c731b80fed64 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.937154765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a32c50a9-00c2-4e51-9107-c731b80fed64 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:04:03 kubernetes-upgrade-219954 crio[2814]: time="2024-03-08 04:04:03.937561582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf946dd75a7404dd39c19cd1342080953d1beb621df8d1276f292f35a3572359,PodSandboxId:f3adec48b571d4e65d492a0f59059c34c7ec1304a017e2abe2bc2596ffc89195,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640068694263,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad80bb96bbef4663b4e906a01cee5a89e619b57861b0be59a0eebf1b3244ef60,PodSandboxId:619951a4d0bd46e78afc39055f213391a7c00b3598050df2268f0f191dd28200,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709870640097058250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b8f5a8e3bc029170a3a19d571e546c60059af197cc77bb04168053260d72bc,PodSandboxId:602302ccb665977c37b5182e9f7bc14e8302d8dcd2265f79647489ca24723058,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER
_RUNNING,CreatedAt:1709870640018034371,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9808cec5572b811b7ad8f91c434a6a5033deea5246c7f6eb5247fdc775610195,PodSandboxId:762f9de873eb810e07757401bdecfc624514c7fe974df20957b62a98ceb2e37a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17098
70640071524397,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45bc9d8b0ebe40ba9a82b034fa1d0c0d2617443dd556d3c0a907caea23057e8a,PodSandboxId:18617a6381efc3edaea5de7042d0694878da4e1670edf2ab06984dabdb9ae603,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709870635493582892,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a9df26841c18311d5de4dc5dc5916beb36171f1981a8338a824609fcc48da1,PodSandboxId:40300c2e19844bc4a1bd7e8d23a131d5c4599cf9233fb8fe401a866a484ce97d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709870635459282729,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70877bc22bbc0ad63c82a026b1a213a28168d764daa6e7ac8c8d810b15dfa875,PodSandboxId:f903dca5ca1eb5ee357af843c844128ab1dfb238ba8b7df1e08087452f68fbd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709870635460343607,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8e0a7e9538b2702195d94254f52b4a2d82634baa16acc8ad244e45b712bb2c,PodSandboxId:4a851596e2605ceaa04bf3e44901bb290200d5902e23668e250d359ea3cf734a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709870635437904344,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19,PodSandboxId:fdfb8a2946a3afaa983f0749f33abd01420bbfc1267cb633b04d2f94977dd061,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709870628597942822,Labels:map[string]stri
ng{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f00ce6-c92b-4db0-b058-b32a3a0e6329,},Annotations:map[string]string{io.kubernetes.container.hash: 11eee2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e,PodSandboxId:1d48e6c8c11d2f68cd0b5b891140afb1b6636828566009ee6734a1c8500c7bbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1709870628289217707,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xkh7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04664e7f-dab0-4bc6-bd0a-74fefeb98997,},Annotations:map[string]string{io.kubernetes.container.hash: bc1c667e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde,PodSandboxId:dc67f676a943ff575e7bf793ebeb1b378178f6cbeb02b685e89361463ef7d805,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629139678612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.n
ame: coredns-76f75df574-9vn5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ce3d7f7-61ab-4a3c-8f96-6f16e353884f,},Annotations:map[string]string{io.kubernetes.container.hash: b4fc1a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94,PodSandboxId:7d7c31251ffbbe1c6cfde063db9782edf6a41e00105471148a6d80ffd23a9b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7b
d410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1709870629106339981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-56hkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d,},Annotations:map[string]string{io.kubernetes.container.hash: fbfb3ea8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e,PodSandboxId:0fa5edd97a6588a661b95a6465a29c0068a351695024004d140707f0a7264498,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709870628221118575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0845c7ce3499f82c702682427bb1dd,},Annotations:map[string]string{io.kubernetes.container.hash: 529f5be9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b,PodSandboxId:cbbc9ff63b982c47f8b141cc6d2e257f1bf29ecad47314ba1877056976d7ddad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},I
mage:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709870628106514838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fddf2bea8fd46118453b36762cb1522c,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e,PodSandboxId:17801f336354c0e8d23c6702e5cb1dc5ce780d031d659eca02bbc53790cc827c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709870628085338086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b240e5dae52e343a17d604fbdd651a,},Annotations:map[string]string{io.kubernetes.container.hash: 6b8b12bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829,PodSandboxId:1ec3e4814f54fe4cae104bec8a8185001bf65e0d9980304a54c13a88858097c2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt
:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709870627911610843,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-219954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75e7beec97d6c4069e5c092f59280983,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a32c50a9-00c2-4e51-9107-c731b80fed64 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad80bb96bbef4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   619951a4d0bd4       coredns-76f75df574-9vn5s
	9808cec5572b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   762f9de873eb8       storage-provisioner
	cf946dd75a740       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   f3adec48b571d       coredns-76f75df574-56hkc
	c1b8f5a8e3bc0       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   4 seconds ago       Running             kube-proxy                2                   602302ccb6659       kube-proxy-xkh7c
	45bc9d8b0ebe4       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   8 seconds ago       Running             kube-scheduler            2                   18617a6381efc       kube-scheduler-kubernetes-upgrade-219954
	70877bc22bbc0       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   8 seconds ago       Running             kube-apiserver            2                   f903dca5ca1eb       kube-apiserver-kubernetes-upgrade-219954
	67a9df26841c1       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   8 seconds ago       Running             etcd                      2                   40300c2e19844       etcd-kubernetes-upgrade-219954
	4e8e0a7e9538b       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   8 seconds ago       Running             kube-controller-manager   2                   4a851596e2605       kube-controller-manager-kubernetes-upgrade-219954
	72aed2c2a4191       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Exited              coredns                   1                   dc67f676a943f       coredns-76f75df574-9vn5s
	769fde8db5ebe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Exited              coredns                   1                   7d7c31251ffbb       coredns-76f75df574-56hkc
	b5bd864b9437f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   fdfb8a2946a3a       storage-provisioner
	ae38708663164       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   15 seconds ago      Exited              kube-proxy                1                   1d48e6c8c11d2       kube-proxy-xkh7c
	1599017ef0196       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   15 seconds ago      Exited              etcd                      1                   0fa5edd97a658       etcd-kubernetes-upgrade-219954
	5fbf386c2a56f       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   15 seconds ago      Exited              kube-controller-manager   1                   cbbc9ff63b982       kube-controller-manager-kubernetes-upgrade-219954
	50a527674e1ea       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   15 seconds ago      Exited              kube-apiserver            1                   17801f336354c       kube-apiserver-kubernetes-upgrade-219954
	3ef2775a66bd1       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   16 seconds ago      Exited              kube-scheduler            1                   1ec3e4814f54f       kube-scheduler-kubernetes-upgrade-219954
	
	
	==> coredns [72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde] <==
	
	
	==> coredns [769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94] <==
	
	
	==> coredns [ad80bb96bbef4663b4e906a01cee5a89e619b57861b0be59a0eebf1b3244ef60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cf946dd75a7404dd39c19cd1342080953d1beb621df8d1276f292f35a3572359] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-219954
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-219954
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-219954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:03:59 +0000   Fri, 08 Mar 2024 04:03:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:03:59 +0000   Fri, 08 Mar 2024 04:03:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:03:59 +0000   Fri, 08 Mar 2024 04:03:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:03:59 +0000   Fri, 08 Mar 2024 04:03:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    kubernetes-upgrade-219954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 723057dd85ba4216844ca9f83af99d5f
	  System UUID:                723057dd-85ba-4216-844c-a9f83af99d5f
	  Boot ID:                    c835bcc7-3d4a-46f0-85d4-714ace86d375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-56hkc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-76f75df574-9vn5s                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-219954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         59s
	  kube-system                 kube-apiserver-kubernetes-upgrade-219954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-219954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-proxy-xkh7c                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-219954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-219954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-219954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)  kubelet          Node kubernetes-upgrade-219954 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           44s                node-controller  Node kubernetes-upgrade-219954 event: Registered Node kubernetes-upgrade-219954 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.752250] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.065417] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079734] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.228762] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.126622] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.260495] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +5.170195] systemd-fstab-generator[724]: Ignoring "noauto" option for root device
	[  +0.066797] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.328286] systemd-fstab-generator[854]: Ignoring "noauto" option for root device
	[Mar 8 04:03] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.082779] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.302026] kauditd_printk_skb: 18 callbacks suppressed
	[ +31.473817] systemd-fstab-generator[2041]: Ignoring "noauto" option for root device
	[  +0.131701] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.167514] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.435202] systemd-fstab-generator[2267]: Ignoring "noauto" option for root device
	[  +0.287974] systemd-fstab-generator[2348]: Ignoring "noauto" option for root device
	[  +0.859082] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +2.520175] systemd-fstab-generator[3277]: Ignoring "noauto" option for root device
	[  +2.865856] systemd-fstab-generator[3514]: Ignoring "noauto" option for root device
	[  +0.105153] kauditd_printk_skb: 286 callbacks suppressed
	[  +5.709248] kauditd_printk_skb: 40 callbacks suppressed
	[Mar 8 04:04] systemd-fstab-generator[4032]: Ignoring "noauto" option for root device
	
	
	==> etcd [1599017ef01966130986baae9e3c60c79ec66854843d2f6551343b7d2f620c5e] <==
	{"level":"warn","ts":"2024-03-08T04:03:49.0301Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-08T04:03:49.030195Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.107:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.107:2380","--initial-cluster=kubernetes-upgrade-219954=https://192.168.39.107:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.107:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.107:2380","--name=kubernetes-upgrade-219954","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-08T04:03:49.030319Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-08T04:03:49.030358Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-08T04:03:49.030376Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2024-03-08T04:03:49.030415Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:03:49.050975Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"]}
	{"level":"info","ts":"2024-03-08T04:03:49.053185Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.10","git-sha":"0223ca52b","go-version":"go1.20.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-219954","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-08T04:03:49.080227Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"26.614497ms"}
	{"level":"info","ts":"2024-03-08T04:03:49.118532Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-08T04:03:49.192963Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","commit-index":402}
	{"level":"info","ts":"2024-03-08T04:03:49.193119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-08T04:03:49.193145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became follower at term 2"}
	{"level":"info","ts":"2024-03-08T04:03:49.193164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ec1614c5c0f7335e [peers: [], term: 2, commit: 402, applied: 0, lastindex: 402, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-08T04:03:49.208981Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [67a9df26841c18311d5de4dc5dc5916beb36171f1981a8338a824609fcc48da1] <==
	{"level":"info","ts":"2024-03-08T04:03:55.95756Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:03:55.957571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:03:55.95779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e switched to configuration voters=(17011807482017166174)"}
	{"level":"info","ts":"2024-03-08T04:03:55.957961Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2024-03-08T04:03:55.958555Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:03:55.958583Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:03:55.991485Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:03:55.999889Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-03-08T04:03:55.99995Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-03-08T04:03:56.001174Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:03:56.001239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:03:56.896916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T04:03:56.896984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:03:56.897027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-03-08T04:03:56.897041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:03:56.897055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-03-08T04:03:56.89707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-03-08T04:03:56.897102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-03-08T04:03:56.903681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:03:56.903749Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:03:56.903775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:03:56.91311Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-03-08T04:03:56.918898Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:03:56.920583Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:03:56.925876Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:kubernetes-upgrade-219954 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	
	
	==> kernel <==
	 04:04:04 up 1 min,  0 users,  load average: 2.29, 0.61, 0.21
	Linux kubernetes-upgrade-219954 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50a527674e1eaafa25b5af1fa925aa1dfbfa73d26aa04b83c1f2a38227121c8e] <==
	I0308 04:03:49.177301       1 options.go:222] external host was not specified, using 192.168.39.107
	I0308 04:03:49.198963       1 server.go:148] Version: v1.29.0-rc.2
	I0308 04:03:49.199035       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [70877bc22bbc0ad63c82a026b1a213a28168d764daa6e7ac8c8d810b15dfa875] <==
	I0308 04:03:58.890639       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:03:58.891392       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:03:58.892947       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 04:03:58.893156       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 04:03:59.033739       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 04:03:59.035058       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 04:03:59.035136       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 04:03:59.037598       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 04:03:59.041048       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0308 04:03:59.041111       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0308 04:03:59.041244       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0308 04:03:59.081251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0308 04:03:59.094267       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 04:03:59.095410       1 aggregator.go:165] initial CRD sync complete...
	I0308 04:03:59.095503       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 04:03:59.095533       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 04:03:59.095638       1 cache.go:39] Caches are synced for autoregister controller
	I0308 04:03:59.100097       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 04:03:59.846428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 04:04:00.373598       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 04:04:01.202079       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 04:04:01.219561       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 04:04:01.287034       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 04:04:01.326759       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 04:04:01.340774       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4e8e0a7e9538b2702195d94254f52b4a2d82634baa16acc8ad244e45b712bb2c] <==
	I0308 04:03:56.463181       1 serving.go:380] Generated self-signed cert in-memory
	I0308 04:03:57.092443       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0308 04:03:57.092512       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:03:57.098019       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:03:57.098245       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:03:57.102393       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:03:57.102527       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0308 04:04:00.921280       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0308 04:04:00.921353       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0308 04:04:00.921594       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0308 04:04:00.949655       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0308 04:04:00.949814       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0308 04:04:00.954676       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	E0308 04:04:00.962812       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0308 04:04:00.962933       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0308 04:04:01.022550       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [5fbf386c2a56ff8209384ce757e60b06a464d4c2e82297c3da46a60cf389415b] <==
	
	
	==> kube-proxy [ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e] <==
	
	
	==> kube-proxy [c1b8f5a8e3bc029170a3a19d571e546c60059af197cc77bb04168053260d72bc] <==
	I0308 04:04:00.603014       1 server_others.go:72] "Using iptables proxy"
	I0308 04:04:00.636655       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.107"]
	I0308 04:04:00.736447       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0308 04:04:00.736521       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:04:00.736546       1 server_others.go:168] "Using iptables Proxier"
	I0308 04:04:00.739676       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:04:00.740108       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0308 04:04:00.740145       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:04:00.741730       1 config.go:188] "Starting service config controller"
	I0308 04:04:00.741741       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:04:00.741761       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:04:00.741765       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:04:00.742222       1 config.go:315] "Starting node config controller"
	I0308 04:04:00.742270       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:04:00.842307       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 04:04:00.842486       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:04:00.842992       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3ef2775a66bd1916b469b86a36904f26e939755148fa926522b620a522fe6829] <==
	
	
	==> kube-scheduler [45bc9d8b0ebe40ba9a82b034fa1d0c0d2617443dd556d3c0a907caea23057e8a] <==
	W0308 04:03:59.012084       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 04:03:59.012129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 04:03:59.012178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 04:03:59.012188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 04:03:59.012225       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:03:59.012234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 04:03:59.012287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 04:03:59.012295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 04:03:59.012331       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 04:03:59.012338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 04:03:59.012370       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 04:03:59.012410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 04:03:59.012457       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 04:03:59.012466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 04:03:59.012499       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 04:03:59.012507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 04:03:59.012541       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:03:59.012549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 04:03:59.012581       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 04:03:59.012619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 04:03:59.012664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:03:59.012673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 04:03:59.012745       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 04:03:59.012756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0308 04:03:59.092250       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:03:55 kubernetes-upgrade-219954 kubelet[3521]: E0308 04:03:55.795655    3521 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-219954&limit=500&resourceVersion=0": dial tcp 192.168.39.107:8443: connect: connection refused
	Mar 08 04:03:56 kubernetes-upgrade-219954 kubelet[3521]: W0308 04:03:56.037162    3521 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.107:8443: connect: connection refused
	Mar 08 04:03:56 kubernetes-upgrade-219954 kubelet[3521]: E0308 04:03:56.037505    3521 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.107:8443: connect: connection refused
	Mar 08 04:03:56 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:56.234321    3521 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-219954"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.086412    3521 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-219954"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.086520    3521 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-219954"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.089213    3521 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.090378    3521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: E0308 04:03:59.358008    3521 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-219954\" already exists" pod="kube-system/etcd-kubernetes-upgrade-219954"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: E0308 04:03:59.361197    3521 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-219954\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-219954"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.659367    3521 apiserver.go:52] "Watching apiserver"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.664528    3521 topology_manager.go:215] "Topology Admit Handler" podUID="35f00ce6-c92b-4db0-b058-b32a3a0e6329" podNamespace="kube-system" podName="storage-provisioner"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.664674    3521 topology_manager.go:215] "Topology Admit Handler" podUID="04664e7f-dab0-4bc6-bd0a-74fefeb98997" podNamespace="kube-system" podName="kube-proxy-xkh7c"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.664779    3521 topology_manager.go:215] "Topology Admit Handler" podUID="9ceaaed8-b64c-44c9-8bc1-eb8d0b914b1d" podNamespace="kube-system" podName="coredns-76f75df574-56hkc"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.665486    3521 topology_manager.go:215] "Topology Admit Handler" podUID="9ce3d7f7-61ab-4a3c-8f96-6f16e353884f" podNamespace="kube-system" podName="coredns-76f75df574-9vn5s"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.693378    3521 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.780967    3521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04664e7f-dab0-4bc6-bd0a-74fefeb98997-lib-modules\") pod \"kube-proxy-xkh7c\" (UID: \"04664e7f-dab0-4bc6-bd0a-74fefeb98997\") " pod="kube-system/kube-proxy-xkh7c"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.781091    3521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/35f00ce6-c92b-4db0-b058-b32a3a0e6329-tmp\") pod \"storage-provisioner\" (UID: \"35f00ce6-c92b-4db0-b058-b32a3a0e6329\") " pod="kube-system/storage-provisioner"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.781117    3521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04664e7f-dab0-4bc6-bd0a-74fefeb98997-xtables-lock\") pod \"kube-proxy-xkh7c\" (UID: \"04664e7f-dab0-4bc6-bd0a-74fefeb98997\") " pod="kube-system/kube-proxy-xkh7c"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.966201    3521 scope.go:117] "RemoveContainer" containerID="b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.966779    3521 scope.go:117] "RemoveContainer" containerID="ae38708663164fee5e3b4fb93796cafe5cb2c7db25cbb109034f9939a2c3b02e"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.967304    3521 scope.go:117] "RemoveContainer" containerID="72aed2c2a4191165c6c4613b8b965a6623ef938a4f303141b3d83051ec9f9fde"
	Mar 08 04:03:59 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:03:59.968132    3521 scope.go:117] "RemoveContainer" containerID="769fde8db5ebeb626d3e81e8d98139f0abe728758adb5b3f19043ea93dc9fc94"
	Mar 08 04:04:02 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:04:02.140077    3521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 08 04:04:02 kubernetes-upgrade-219954 kubelet[3521]: I0308 04:04:02.795642    3521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [9808cec5572b811b7ad8f91c434a6a5033deea5246c7f6eb5247fdc775610195] <==
	I0308 04:04:00.337446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:04:00.363142       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:04:00.363218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:04:00.412676       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:04:00.412884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-219954_bd8e4989-a95b-41bf-957f-abe60cc4a83e!
	I0308 04:04:00.413670       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cb486e2-e473-440e-b4a5-b5bd048748b4", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-219954_bd8e4989-a95b-41bf-957f-abe60cc4a83e became leader
	I0308 04:04:00.514981       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-219954_bd8e4989-a95b-41bf-957f-abe60cc4a83e!
	
	
	==> storage-provisioner [b5bd864b9437f645bc8870084b17b9b8f4f21ae28df64d876bf471637433cd19] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:04:03.320327  956523 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18333-911675/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-219954 -n kubernetes-upgrade-219954
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-219954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-219954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-219954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-219954: (1.156262772s)
--- FAIL: TestKubernetesUpgrade (401.81s)
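
	The stderr block above ends with "bufio.Scanner: token too long" while reading lastStart.txt: Go's bufio.Scanner refuses any single line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). A minimal sketch, assuming a hypothetical log file with very long lines (the file name and buffer sizes below are illustrative, not taken from minikube's code), of scanning with an enlarged buffer so the read does not fail the same way:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// "lastStart.txt" is a stand-in for any log file containing very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-line limit is bufio.MaxScanTokenSize (64 KiB); raising the
		// cap keeps one long line from surfacing as "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process one line
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this would report bufio.ErrTooLong.
			fmt.Fprintln(os.Stderr, err)
		}
	}
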

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (66.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-851116 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-851116 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.695054525s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-851116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-851116" primary control-plane node in "pause-851116" cluster
	* Updating the running kvm2 "pause-851116" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-851116" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:01:15.867965  951650 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:01:15.868102  951650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:01:15.868117  951650 out.go:304] Setting ErrFile to fd 2...
	I0308 04:01:15.868124  951650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:01:15.868362  951650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:01:15.869004  951650 out.go:298] Setting JSON to false
	I0308 04:01:15.870081  951650 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27802,"bootTime":1709842674,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:01:15.870172  951650 start.go:139] virtualization: kvm guest
	I0308 04:01:15.872594  951650 out.go:177] * [pause-851116] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:01:15.874019  951650 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:01:15.875522  951650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:01:15.874047  951650 notify.go:220] Checking for updates...
	I0308 04:01:15.878505  951650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:01:15.880314  951650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:01:15.881573  951650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:01:15.882774  951650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:01:15.884324  951650 config.go:182] Loaded profile config "pause-851116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:01:15.884773  951650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:01:15.884825  951650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:01:15.904940  951650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I0308 04:01:15.905369  951650 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:01:15.905893  951650 main.go:141] libmachine: Using API Version  1
	I0308 04:01:15.905935  951650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:01:15.906367  951650 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:01:15.906557  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:15.906856  951650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:01:15.907124  951650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:01:15.907157  951650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:01:15.921639  951650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0308 04:01:15.922175  951650 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:01:15.922815  951650 main.go:141] libmachine: Using API Version  1
	I0308 04:01:15.922866  951650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:01:15.923200  951650 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:01:15.923472  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:15.959812  951650 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:01:15.961190  951650 start.go:297] selected driver: kvm2
	I0308 04:01:15.961213  951650 start.go:901] validating driver "kvm2" against &{Name:pause-851116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.28.4 ClusterName:pause-851116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:01:15.961455  951650 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:01:15.961950  951650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:01:15.962064  951650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:01:15.977004  951650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:01:15.978047  951650 cni.go:84] Creating CNI manager for ""
	I0308 04:01:15.978073  951650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:01:15.978171  951650 start.go:340] cluster config:
	{Name:pause-851116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-851116 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:01:15.978359  951650 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:01:15.980253  951650 out.go:177] * Starting "pause-851116" primary control-plane node in "pause-851116" cluster
	I0308 04:01:15.981560  951650 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:01:15.981597  951650 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 04:01:15.981605  951650 cache.go:56] Caching tarball of preloaded images
	I0308 04:01:15.981718  951650 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:01:15.981732  951650 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 04:01:15.981895  951650 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/config.json ...
	I0308 04:01:15.982123  951650 start.go:360] acquireMachinesLock for pause-851116: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:01:25.334710  951650 start.go:364] duration metric: took 9.352532051s to acquireMachinesLock for "pause-851116"
	I0308 04:01:25.334757  951650 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:01:25.334769  951650 fix.go:54] fixHost starting: 
	I0308 04:01:25.335172  951650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:01:25.335227  951650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:01:25.355768  951650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0308 04:01:25.356196  951650 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:01:25.356872  951650 main.go:141] libmachine: Using API Version  1
	I0308 04:01:25.356897  951650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:01:25.357344  951650 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:01:25.357585  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:25.357765  951650 main.go:141] libmachine: (pause-851116) Calling .GetState
	I0308 04:01:25.359497  951650 fix.go:112] recreateIfNeeded on pause-851116: state=Running err=<nil>
	W0308 04:01:25.359519  951650 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:01:25.361211  951650 out.go:177] * Updating the running kvm2 "pause-851116" VM ...
	I0308 04:01:25.362545  951650 machine.go:94] provisionDockerMachine start ...
	I0308 04:01:25.362565  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:25.362825  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:25.365522  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.366047  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.366073  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.366263  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:25.366445  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.366622  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.366788  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:25.367361  951650 main.go:141] libmachine: Using SSH client type: native
	I0308 04:01:25.367661  951650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0308 04:01:25.367671  951650 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:01:25.478611  951650 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851116
	
	I0308 04:01:25.478647  951650 main.go:141] libmachine: (pause-851116) Calling .GetMachineName
	I0308 04:01:25.478939  951650 buildroot.go:166] provisioning hostname "pause-851116"
	I0308 04:01:25.478975  951650 main.go:141] libmachine: (pause-851116) Calling .GetMachineName
	I0308 04:01:25.479173  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:25.482183  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.482582  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.482621  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.482808  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:25.483016  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.483221  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.483354  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:25.483571  951650 main.go:141] libmachine: Using SSH client type: native
	I0308 04:01:25.483763  951650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0308 04:01:25.483782  951650 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-851116 && echo "pause-851116" | sudo tee /etc/hostname
	I0308 04:01:25.614571  951650 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-851116
	
	I0308 04:01:25.614603  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:25.617788  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.618135  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.618168  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.618341  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:25.618539  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.618679  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.618860  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:25.619075  951650 main.go:141] libmachine: Using SSH client type: native
	I0308 04:01:25.619298  951650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0308 04:01:25.619316  951650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-851116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-851116/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-851116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:01:25.731394  951650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:01:25.731426  951650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:01:25.731479  951650 buildroot.go:174] setting up certificates
	I0308 04:01:25.731491  951650 provision.go:84] configureAuth start
	I0308 04:01:25.731505  951650 main.go:141] libmachine: (pause-851116) Calling .GetMachineName
	I0308 04:01:25.731825  951650 main.go:141] libmachine: (pause-851116) Calling .GetIP
	I0308 04:01:25.735062  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.735551  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.735583  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.735854  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:25.738588  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.739005  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.739055  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.739268  951650 provision.go:143] copyHostCerts
	I0308 04:01:25.739347  951650 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:01:25.739368  951650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:01:25.739430  951650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:01:25.739539  951650 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:01:25.739556  951650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:01:25.739587  951650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:01:25.739682  951650 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:01:25.739691  951650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:01:25.739710  951650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:01:25.739765  951650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.pause-851116 san=[127.0.0.1 192.168.83.77 localhost minikube pause-851116]
	I0308 04:01:25.890457  951650 provision.go:177] copyRemoteCerts
	I0308 04:01:25.890518  951650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:01:25.890542  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:25.893333  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.893717  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:25.893756  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:25.893929  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:25.894193  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:25.894400  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:25.894576  951650 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/pause-851116/id_rsa Username:docker}
	I0308 04:01:25.981027  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 04:01:26.014638  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:01:26.053955  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:01:26.087626  951650 provision.go:87] duration metric: took 356.114396ms to configureAuth
	I0308 04:01:26.087668  951650 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:01:26.087953  951650 config.go:182] Loaded profile config "pause-851116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:01:26.088061  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:26.091110  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:26.091500  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:26.091531  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:26.091740  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:26.091932  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:26.092137  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:26.092346  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:26.092578  951650 main.go:141] libmachine: Using SSH client type: native
	I0308 04:01:26.092801  951650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0308 04:01:26.092824  951650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:01:31.738072  951650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:01:31.738111  951650 machine.go:97] duration metric: took 6.375548912s to provisionDockerMachine
	I0308 04:01:31.738127  951650 start.go:293] postStartSetup for "pause-851116" (driver="kvm2")
	I0308 04:01:31.738141  951650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:01:31.738165  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:31.738591  951650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:01:31.738628  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:31.741901  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:31.742384  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:31.742416  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:31.742575  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:31.742812  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:31.743007  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:31.743219  951650 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/pause-851116/id_rsa Username:docker}
	I0308 04:01:31.855765  951650 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:01:31.861379  951650 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:01:31.861412  951650 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:01:31.861475  951650 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:01:31.861566  951650 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:01:31.861723  951650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:01:31.873555  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:01:31.905313  951650 start.go:296] duration metric: took 167.166863ms for postStartSetup
	I0308 04:01:31.905358  951650 fix.go:56] duration metric: took 6.57059078s for fixHost
	I0308 04:01:31.905380  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:31.908342  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:31.908760  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:31.908798  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:31.908972  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:31.909229  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:31.909426  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:31.909627  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:31.909820  951650 main.go:141] libmachine: Using SSH client type: native
	I0308 04:01:31.910047  951650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.83.77 22 <nil> <nil>}
	I0308 04:01:31.910065  951650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:01:32.028276  951650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870492.015531777
	
	I0308 04:01:32.028315  951650 fix.go:216] guest clock: 1709870492.015531777
	I0308 04:01:32.028328  951650 fix.go:229] Guest: 2024-03-08 04:01:32.015531777 +0000 UTC Remote: 2024-03-08 04:01:31.905362546 +0000 UTC m=+16.089609289 (delta=110.169231ms)
	I0308 04:01:32.028391  951650 fix.go:200] guest clock delta is within tolerance: 110.169231ms
	I0308 04:01:32.028412  951650 start.go:83] releasing machines lock for "pause-851116", held for 6.693668168s
	I0308 04:01:32.028448  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:32.028771  951650 main.go:141] libmachine: (pause-851116) Calling .GetIP
	I0308 04:01:32.032032  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.032436  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:32.032466  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.032609  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:32.033331  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:32.033539  951650 main.go:141] libmachine: (pause-851116) Calling .DriverName
	I0308 04:01:32.033648  951650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:01:32.033698  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:32.033810  951650 ssh_runner.go:195] Run: cat /version.json
	I0308 04:01:32.033838  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHHostname
	I0308 04:01:32.037262  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.037756  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:32.037781  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.038084  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:32.038143  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.038258  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:32.038393  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:32.038520  951650 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/pause-851116/id_rsa Username:docker}
	I0308 04:01:32.038643  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:32.038667  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:32.038840  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHPort
	I0308 04:01:32.038992  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHKeyPath
	I0308 04:01:32.039180  951650 main.go:141] libmachine: (pause-851116) Calling .GetSSHUsername
	I0308 04:01:32.039338  951650 sshutil.go:53] new ssh client: &{IP:192.168.83.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/pause-851116/id_rsa Username:docker}
	I0308 04:01:32.155309  951650 ssh_runner.go:195] Run: systemctl --version
	I0308 04:01:32.163933  951650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:01:32.352361  951650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:01:32.363972  951650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:01:32.364049  951650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:01:32.376925  951650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 04:01:32.376951  951650 start.go:494] detecting cgroup driver to use...
	I0308 04:01:32.377024  951650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:01:32.398658  951650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:01:32.413296  951650 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:01:32.413364  951650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:01:32.427586  951650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:01:32.441700  951650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:01:32.593440  951650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:01:32.740081  951650 docker.go:233] disabling docker service ...
	I0308 04:01:32.740182  951650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:01:32.759826  951650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:01:32.777411  951650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:01:32.940002  951650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:01:33.103303  951650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:01:33.122897  951650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:01:33.150609  951650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:01:33.150695  951650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:01:33.162758  951650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:01:33.162847  951650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:01:33.177935  951650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:01:33.194174  951650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:01:33.207768  951650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:01:33.224420  951650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:01:33.239277  951650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:01:33.255509  951650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:01:33.436240  951650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:01:35.556790  951650 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.120504889s)
	I0308 04:01:35.556832  951650 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:01:35.556895  951650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:01:35.569080  951650 start.go:562] Will wait 60s for crictl version
	I0308 04:01:35.569152  951650 ssh_runner.go:195] Run: which crictl
	I0308 04:01:35.579400  951650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:01:35.887473  951650 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:01:35.887563  951650 ssh_runner.go:195] Run: crio --version
	I0308 04:01:36.164317  951650 ssh_runner.go:195] Run: crio --version
	I0308 04:01:36.310243  951650 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:01:36.311490  951650 main.go:141] libmachine: (pause-851116) Calling .GetIP
	I0308 04:01:36.314723  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:36.315130  951650 main.go:141] libmachine: (pause-851116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:d5:fa", ip: ""} in network mk-pause-851116: {Iface:virbr2 ExpiryTime:2024-03-08 04:59:53 +0000 UTC Type:0 Mac:52:54:00:fc:d5:fa Iaid: IPaddr:192.168.83.77 Prefix:24 Hostname:pause-851116 Clientid:01:52:54:00:fc:d5:fa}
	I0308 04:01:36.315149  951650 main.go:141] libmachine: (pause-851116) DBG | domain pause-851116 has defined IP address 192.168.83.77 and MAC address 52:54:00:fc:d5:fa in network mk-pause-851116
	I0308 04:01:36.315486  951650 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0308 04:01:36.400801  951650 kubeadm.go:877] updating cluster {Name:pause-851116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:pause-851116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:01:36.401011  951650 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:01:36.401086  951650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:01:36.633158  951650 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:01:36.633184  951650 crio.go:415] Images already preloaded, skipping extraction
	I0308 04:01:36.633245  951650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:01:36.713999  951650 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:01:36.714032  951650 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:01:36.714043  951650 kubeadm.go:928] updating node { 192.168.83.77 8443 v1.28.4 crio true true} ...
	I0308 04:01:36.714202  951650 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-851116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-851116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:01:36.714302  951650 ssh_runner.go:195] Run: crio config
	I0308 04:01:36.880682  951650 cni.go:84] Creating CNI manager for ""
	I0308 04:01:36.880755  951650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:01:36.880783  951650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:01:36.880825  951650 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.77 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-851116 NodeName:pause-851116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:01:36.881012  951650 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-851116"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
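For reference, the four YAML documents rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the kubeadm config that the scp step below writes to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal way to re-read that file from the node, assuming the pause-851116 profile is still up (illustrative command, not part of the captured log):

  out/minikube-linux-amd64 -p pause-851116 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
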
	I0308 04:01:36.881102  951650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:01:36.955071  951650 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:01:36.955218  951650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:01:37.027953  951650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0308 04:01:37.058785  951650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:01:37.091373  951650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 04:01:37.124889  951650 ssh_runner.go:195] Run: grep 192.168.83.77	control-plane.minikube.internal$ /etc/hosts
	I0308 04:01:37.131415  951650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:01:37.345690  951650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:01:37.366467  951650 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116 for IP: 192.168.83.77
	I0308 04:01:37.366498  951650 certs.go:194] generating shared ca certs ...
	I0308 04:01:37.366524  951650 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:01:37.366718  951650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:01:37.366777  951650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:01:37.366791  951650 certs.go:256] generating profile certs ...
	I0308 04:01:37.366915  951650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/client.key
	I0308 04:01:37.366998  951650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/apiserver.key.e78074ed
	I0308 04:01:37.367049  951650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/proxy-client.key
	I0308 04:01:37.367213  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:01:37.367260  951650 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:01:37.367274  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:01:37.367312  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:01:37.367352  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:01:37.367381  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:01:37.367447  951650 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:01:37.368477  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:01:37.398618  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:01:37.455859  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:01:37.498025  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:01:37.532218  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 04:01:37.571665  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:01:37.602519  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:01:37.637006  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/pause-851116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:01:37.679652  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:01:37.720654  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:01:37.750559  951650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:01:37.784696  951650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:01:37.821290  951650 ssh_runner.go:195] Run: openssl version
	I0308 04:01:37.831711  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:01:37.847503  951650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:01:37.857129  951650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:01:37.857194  951650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:01:37.867816  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:01:37.882945  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:01:37.899133  951650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:01:37.908704  951650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:01:37.908772  951650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:01:37.943532  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:01:37.985269  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:01:38.004461  951650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:01:38.009909  951650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:01:38.009970  951650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:01:38.018504  951650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
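The hash-and-symlink steps above follow OpenSSL's c_rehash convention: each CA certificate is first linked into /etc/ssl/certs under its own name, then a second symlink named after its subject hash (the value printed by openssl x509 -hash -noout) points at it, which is how OpenSSL locates trusted CAs at verification time. A minimal sketch of the same step done by hand for the minikubeCA certificate (b5213941 is the hash value seen in the log above):

  # compute the subject hash; prints b5213941 for this CA
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # link the certificate into the trust directory, then link the hash name to it
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
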
	I0308 04:01:38.030824  951650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:01:38.036700  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:01:38.046663  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:01:38.057097  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:01:38.066245  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:01:38.073095  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:01:38.082674  951650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:01:38.089564  951650 kubeadm.go:391] StartCluster: {Name:pause-851116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:pause-851116 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:01:38.089749  951650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:01:38.089815  951650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:01:38.177642  951650 cri.go:89] found id: "139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa"
	I0308 04:01:38.177669  951650 cri.go:89] found id: "480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3"
	I0308 04:01:38.177675  951650 cri.go:89] found id: "9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d"
	I0308 04:01:38.177680  951650 cri.go:89] found id: "1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237"
	I0308 04:01:38.177684  951650 cri.go:89] found id: "9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a"
	I0308 04:01:38.177689  951650 cri.go:89] found id: "1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7"
	I0308 04:01:38.177692  951650 cri.go:89] found id: "481fb1baa7d56ebe4c52b4ae589fb9d1e84b307cf619c3118881ea6410379cfc"
	I0308 04:01:38.177696  951650 cri.go:89] found id: "f23f768249fc022ae9f22a6c45ce3b1f6c157ad946a4bddbeaf6232eb146cafb"
	I0308 04:01:38.177700  951650 cri.go:89] found id: "1a1d23a192ebb691fb593175b2dad635f260759c8d9c416603b656441542f0c1"
	I0308 04:01:38.177708  951650 cri.go:89] found id: "043a4153fd933d81b0e53c1f80bcf327155276722936dad09199386a373059e5"
	I0308 04:01:38.177711  951650 cri.go:89] found id: "03763121ac92aa70e87c39ff650a4eb06bc91b5ee3937cdaf82851587ce4f200"
	I0308 04:01:38.177715  951650 cri.go:89] found id: "ea951999b834dd11f63825df19ff380907d0285c8d1eebd9b6583cb8ebdd81fd"
	I0308 04:01:38.177719  951650 cri.go:89] found id: ""
	I0308 04:01:38.177779  951650 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-851116 -n pause-851116
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-851116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-851116 logs -n 25: (1.517463021s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p offline-crio-290342             | offline-crio-290342       | jenkins | v1.32.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048                 |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-306267          | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-412346          | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-412346          | running-upgrade-412346    | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:00 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-290342             | offline-crio-290342       | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 03:59 UTC |
	| start   | -p pause-851116 --memory=2048      | pause-851116              | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:01 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-306267 stop        | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 03:59 UTC |
	| start   | -p stopped-upgrade-306267          | stopped-upgrade-306267    | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:00 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-412346          | running-upgrade-412346    | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:00 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:01 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-306267          | stopped-upgrade-306267    | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:00 UTC |
	| start   | -p force-systemd-flag-786598       | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:01 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-851116                    | pause-851116              | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:02 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:02 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-786598 ssh cat  | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-786598       | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p cert-expiration-401581          | cert-expiration-401581    | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-219954       | kubernetes-upgrade-219954 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p kubernetes-upgrade-219954       | kubernetes-upgrade-219954 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-995759 sudo        | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC | 08 Mar 24 04:02 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:02:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:02:10.034343  952782 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:02:10.034576  952782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:02:10.034579  952782 out.go:304] Setting ErrFile to fd 2...
	I0308 04:02:10.034582  952782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:02:10.034771  952782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:02:10.035312  952782 out.go:298] Setting JSON to false
	I0308 04:02:10.036382  952782 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27856,"bootTime":1709842674,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:02:10.036444  952782 start.go:139] virtualization: kvm guest
	I0308 04:02:10.038665  952782 out.go:177] * [NoKubernetes-995759] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:02:10.039986  952782 notify.go:220] Checking for updates...
	I0308 04:02:10.040004  952782 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:02:10.041283  952782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:02:10.042450  952782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:02:10.043600  952782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:02:10.044709  952782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:02:10.045766  952782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:02:10.047411  952782 config.go:182] Loaded profile config "NoKubernetes-995759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0308 04:02:10.047975  952782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:02:10.048026  952782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:02:10.063843  952782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0308 04:02:10.064282  952782 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:02:10.064850  952782 main.go:141] libmachine: Using API Version  1
	I0308 04:02:10.064865  952782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:02:10.065210  952782 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:02:10.065432  952782 main.go:141] libmachine: (NoKubernetes-995759) Calling .DriverName
	I0308 04:02:10.065658  952782 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0308 04:02:10.065675  952782 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:02:10.065940  952782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:02:10.065969  952782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:02:10.080320  952782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0308 04:02:10.080731  952782 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:02:10.081228  952782 main.go:141] libmachine: Using API Version  1
	I0308 04:02:10.081244  952782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:02:10.081653  952782 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:02:10.081821  952782 main.go:141] libmachine: (NoKubernetes-995759) Calling .DriverName
	I0308 04:02:10.116903  952782 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:02:10.118233  952782 start.go:297] selected driver: kvm2
	I0308 04:02:10.118240  952782 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-995759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-995759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:02:10.118333  952782 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:02:10.118685  952782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:02:10.118772  952782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:02:10.133810  952782 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:02:10.134644  952782 cni.go:84] Creating CNI manager for ""
	I0308 04:02:10.134660  952782 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:02:10.134718  952782 start.go:340] cluster config:
	{Name:NoKubernetes-995759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-995759 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:02:10.134813  952782 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:02:10.136409  952782 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-995759
	I0308 04:02:06.123891  951650 pod_ready.go:92] pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:06.123919  951650 pod_ready.go:81] duration metric: took 508.843052ms for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:06.123931  951650 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:08.132719  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:10.132787  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:10.187253  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:10.187874  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:10.187891  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:10.187823  952500 retry.go:31] will retry after 1.124544025s: waiting for machine to come up
	I0308 04:02:11.314586  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:11.315151  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:11.315175  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:11.315108  952500 retry.go:31] will retry after 1.098017703s: waiting for machine to come up
	I0308 04:02:12.414523  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:12.415050  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:12.415075  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:12.414996  952500 retry.go:31] will retry after 1.668632531s: waiting for machine to come up
	I0308 04:02:14.084972  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:14.085584  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:14.085606  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:14.085515  952500 retry.go:31] will retry after 1.604300431s: waiting for machine to come up
	I0308 04:02:10.137662  952782 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0308 04:02:10.162279  952782 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0308 04:02:10.162412  952782 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/NoKubernetes-995759/config.json ...
	I0308 04:02:10.162627  952782 start.go:360] acquireMachinesLock for NoKubernetes-995759: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:02:12.631496  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:13.631726  951650 pod_ready.go:92] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:13.631753  951650 pod_ready.go:81] duration metric: took 7.507813921s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.631762  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.637217  951650 pod_ready.go:92] pod "kube-apiserver-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:13.637239  951650 pod_ready.go:81] duration metric: took 5.471327ms for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.637247  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.145944  951650 pod_ready.go:92] pod "kube-controller-manager-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.145980  951650 pod_ready.go:81] duration metric: took 1.508724505s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.145995  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.151989  951650 pod_ready.go:92] pod "kube-proxy-wbk4h" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.152022  951650 pod_ready.go:81] duration metric: took 6.018942ms for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.152033  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.157883  951650 pod_ready.go:92] pod "kube-scheduler-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.157906  951650 pod_ready.go:81] duration metric: took 5.86108ms for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.157916  951650 pod_ready.go:38] duration metric: took 9.558488441s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:15.157941  951650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:02:15.173319  951650 ops.go:34] apiserver oom_adj: -16
	I0308 04:02:15.173340  951650 kubeadm.go:591] duration metric: took 36.908164026s to restartPrimaryControlPlane
	I0308 04:02:15.173350  951650 kubeadm.go:393] duration metric: took 37.083797167s to StartCluster
	I0308 04:02:15.173372  951650 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:02:15.173453  951650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:02:15.174393  951650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:02:15.174695  951650 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:02:15.177328  951650 out.go:177] * Verifying Kubernetes components...
	I0308 04:02:15.174881  951650 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:02:15.174984  951650 config.go:182] Loaded profile config "pause-851116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:02:15.178871  951650 out.go:177] * Enabled addons: 
	I0308 04:02:15.181683  951650 addons.go:505] duration metric: took 6.802441ms for enable addons: enabled=[]
	I0308 04:02:15.180408  951650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:02:15.344795  951650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:02:15.362686  951650 node_ready.go:35] waiting up to 6m0s for node "pause-851116" to be "Ready" ...
	I0308 04:02:15.366950  951650 node_ready.go:49] node "pause-851116" has status "Ready":"True"
	I0308 04:02:15.366971  951650 node_ready.go:38] duration metric: took 4.243323ms for node "pause-851116" to be "Ready" ...
	I0308 04:02:15.366981  951650 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:15.373384  951650 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.629528  951650 pod_ready.go:92] pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.629579  951650 pod_ready.go:81] duration metric: took 256.172602ms for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.629594  951650 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.033535  951650 pod_ready.go:92] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.033575  951650 pod_ready.go:81] duration metric: took 403.971348ms for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.033590  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.430827  951650 pod_ready.go:92] pod "kube-apiserver-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.430860  951650 pod_ready.go:81] duration metric: took 397.25973ms for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.430876  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.828414  951650 pod_ready.go:92] pod "kube-controller-manager-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.828450  951650 pod_ready.go:81] duration metric: took 397.564618ms for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.828464  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.229295  951650 pod_ready.go:92] pod "kube-proxy-wbk4h" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:17.229325  951650 pod_ready.go:81] duration metric: took 400.852537ms for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.229338  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.628615  951650 pod_ready.go:92] pod "kube-scheduler-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:17.628642  951650 pod_ready.go:81] duration metric: took 399.296089ms for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.628651  951650 pod_ready.go:38] duration metric: took 2.261657544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:17.628667  951650 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:02:17.628725  951650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:02:17.644197  951650 api_server.go:72] duration metric: took 2.469456847s to wait for apiserver process to appear ...
	I0308 04:02:17.644227  951650 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:02:17.644251  951650 api_server.go:253] Checking apiserver healthz at https://192.168.83.77:8443/healthz ...
	I0308 04:02:17.649378  951650 api_server.go:279] https://192.168.83.77:8443/healthz returned 200:
	ok
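The healthz probe above can be reproduced against the same endpoint; -k skips TLS verification because the API server certificate is issued by the cluster's own minikubeCA rather than a system-trusted CA (illustrative command, not part of the captured log):

  curl -k https://192.168.83.77:8443/healthz   # returns "ok" with HTTP 200, matching the probe above
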
	I0308 04:02:17.650674  951650 api_server.go:141] control plane version: v1.28.4
	I0308 04:02:17.650701  951650 api_server.go:131] duration metric: took 6.464598ms to wait for apiserver health ...
	I0308 04:02:17.650720  951650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:02:17.832023  951650 system_pods.go:59] 6 kube-system pods found
	I0308 04:02:17.832067  951650 system_pods.go:61] "coredns-5dd5756b68-2fsb6" [3bf44768-d86f-46b4-b0d1-d164f794e9ba] Running
	I0308 04:02:17.832074  951650 system_pods.go:61] "etcd-pause-851116" [6dd9d85c-344b-4354-81f4-42e72ae1d443] Running
	I0308 04:02:17.832079  951650 system_pods.go:61] "kube-apiserver-pause-851116" [c83fa9ab-6cde-46fd-a4cc-e6081f4e1634] Running
	I0308 04:02:17.832084  951650 system_pods.go:61] "kube-controller-manager-pause-851116" [c963e21a-f2ad-4a2d-a434-f0c5435d5c15] Running
	I0308 04:02:17.832088  951650 system_pods.go:61] "kube-proxy-wbk4h" [e29ff4ab-c8ac-470a-a28f-ebc871a56d1e] Running
	I0308 04:02:17.832092  951650 system_pods.go:61] "kube-scheduler-pause-851116" [7419809a-3421-4e63-abc5-3c6a1b0e641c] Running
	I0308 04:02:17.832108  951650 system_pods.go:74] duration metric: took 181.372536ms to wait for pod list to return data ...
	I0308 04:02:17.832119  951650 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:02:18.028968  951650 default_sa.go:45] found service account: "default"
	I0308 04:02:18.029001  951650 default_sa.go:55] duration metric: took 196.871761ms for default service account to be created ...
	I0308 04:02:18.029011  951650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:02:18.232859  951650 system_pods.go:86] 6 kube-system pods found
	I0308 04:02:18.232903  951650 system_pods.go:89] "coredns-5dd5756b68-2fsb6" [3bf44768-d86f-46b4-b0d1-d164f794e9ba] Running
	I0308 04:02:18.232911  951650 system_pods.go:89] "etcd-pause-851116" [6dd9d85c-344b-4354-81f4-42e72ae1d443] Running
	I0308 04:02:18.232917  951650 system_pods.go:89] "kube-apiserver-pause-851116" [c83fa9ab-6cde-46fd-a4cc-e6081f4e1634] Running
	I0308 04:02:18.232923  951650 system_pods.go:89] "kube-controller-manager-pause-851116" [c963e21a-f2ad-4a2d-a434-f0c5435d5c15] Running
	I0308 04:02:18.232929  951650 system_pods.go:89] "kube-proxy-wbk4h" [e29ff4ab-c8ac-470a-a28f-ebc871a56d1e] Running
	I0308 04:02:18.232934  951650 system_pods.go:89] "kube-scheduler-pause-851116" [7419809a-3421-4e63-abc5-3c6a1b0e641c] Running
	I0308 04:02:18.232944  951650 system_pods.go:126] duration metric: took 203.925067ms to wait for k8s-apps to be running ...
	I0308 04:02:18.232954  951650 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:02:18.233036  951650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:02:18.248074  951650 system_svc.go:56] duration metric: took 15.106878ms WaitForService to wait for kubelet
	I0308 04:02:18.248112  951650 kubeadm.go:576] duration metric: took 3.073378532s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:02:18.248141  951650 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:02:18.429809  951650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:02:18.429834  951650 node_conditions.go:123] node cpu capacity is 2
	I0308 04:02:18.429848  951650 node_conditions.go:105] duration metric: took 181.700492ms to run NodePressure ...
	I0308 04:02:18.429864  951650 start.go:240] waiting for startup goroutines ...
	I0308 04:02:18.429875  951650 start.go:245] waiting for cluster config update ...
	I0308 04:02:18.429885  951650 start.go:254] writing updated cluster config ...
	I0308 04:02:18.430226  951650 ssh_runner.go:195] Run: rm -f paused
	I0308 04:02:18.488446  951650 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:02:18.490554  951650 out.go:177] * Done! kubectl is now configured to use "pause-851116" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.223742244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870539223710569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82aa20e6-629f-4297-8c97-b1937da34baf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.224352558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=262f2a60-16e9-491e-b3dc-5b1681f71833 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.224407853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=262f2a60-16e9-491e-b3dc-5b1681f71833 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.224675257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=262f2a60-16e9-491e-b3dc-5b1681f71833 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.276824799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c0c11c1-689b-4dc1-a6f3-6e388fcd0ae7 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.277071517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c0c11c1-689b-4dc1-a6f3-6e388fcd0ae7 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.278803534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=429b7c43-e2f0-49f3-8960-b495f7677546 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.279534644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870539279510891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=429b7c43-e2f0-49f3-8960-b495f7677546 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.280519562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=996a5546-9399-4314-b6de-9ed8ae3e1cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.280572916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=996a5546-9399-4314-b6de-9ed8ae3e1cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.280822195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=996a5546-9399-4314-b6de-9ed8ae3e1cbc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.339173511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0a764eb-02a8-427d-b692-252293e40c6a name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.339244829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0a764eb-02a8-427d-b692-252293e40c6a name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.341141134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65ce3dfe-c917-4bcf-94b6-39be8ba13067 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.341883586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870539341858281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65ce3dfe-c917-4bcf-94b6-39be8ba13067 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.343668078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4b88b7a-63fa-455b-a02e-96cde482ff0b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.343792212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4b88b7a-63fa-455b-a02e-96cde482ff0b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.344248786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4b88b7a-63fa-455b-a02e-96cde482ff0b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.387785034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfa5c4a5-5a33-420d-9930-600fe5f71dce name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.387872497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfa5c4a5-5a33-420d-9930-600fe5f71dce name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.389316180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0061015-25a1-42c1-b361-f033a56f3998 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.389803369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870539389780691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0061015-25a1-42c1-b361-f033a56f3998 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.390388598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef2893e8-6b6e-4fd6-8e08-4dff9d0a4e2d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.390470193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef2893e8-6b6e-4fd6-8e08-4dff9d0a4e2d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:19 pause-851116 crio[2298]: time="2024-03-08 04:02:19.390726430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef2893e8-6b6e-4fd6-8e08-4dff9d0a4e2d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ddcdf4c43dfd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 seconds ago      Running             kube-proxy                2                   48e1395f0fe7f       kube-proxy-wbk4h
	956c3c7f5596e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 seconds ago      Running             coredns                   2                   dc8ce2e49bbbb       coredns-5dd5756b68-2fsb6
	dfb6a2dd51f3e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   19 seconds ago      Running             etcd                      2                   a8386b75b008d       etcd-pause-851116
	ded9362be3954       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   19 seconds ago      Running             kube-controller-manager   2                   22008928f2434       kube-controller-manager-pause-851116
	6f1a9d669b6c1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   19 seconds ago      Running             kube-apiserver            2                   2f9176baaea38       kube-apiserver-pause-851116
	707066bebea8f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   19 seconds ago      Running             kube-scheduler            2                   d5f2bfc8e42b9       kube-scheduler-pause-851116
	139651010c6a1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   42 seconds ago      Exited              coredns                   1                   dc8ce2e49bbbb       coredns-5dd5756b68-2fsb6
	480c64bb66896       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   43 seconds ago      Exited              kube-apiserver            1                   2f9176baaea38       kube-apiserver-pause-851116
	9b285b709a8d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   43 seconds ago      Exited              etcd                      1                   a8386b75b008d       etcd-pause-851116
	1d92a46ebae62       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   43 seconds ago      Exited              kube-scheduler            1                   d5f2bfc8e42b9       kube-scheduler-pause-851116
	9d8e855a1dd49       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   43 seconds ago      Exited              kube-controller-manager   1                   22008928f2434       kube-controller-manager-pause-851116
	1c2505a6294ca       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   43 seconds ago      Exited              kube-proxy                1                   48e1395f0fe7f       kube-proxy-wbk4h
	
	
	==> coredns [139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34095 - 4764 "HINFO IN 7577450993068145099.5917319491977946482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063747664s
	
	
	==> coredns [956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56306 - 25975 "HINFO IN 7224382692144182061.814941936348778439. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011050357s
	
	
	==> describe nodes <==
	Name:               pause-851116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-851116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=pause-851116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_00_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:00:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-851116
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.77
	  Hostname:    pause-851116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b3d68dd9f834ada8d595888f9f0f884
	  System UUID:                3b3d68dd-9f83-4ada-8d59-5888f9f0f884
	  Boot ID:                    e69c68d2-80a9-45b5-9e3a-a8b4d04836ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2fsb6                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     106s
	  kube-system                 etcd-pause-851116                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-851116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-851116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-wbk4h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-851116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 38s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeReady                117s                 kubelet          Node pause-851116 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node pause-851116 event: Registered Node pause-851116 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node pause-851116 event: Registered Node pause-851116 in Controller
	
	
	==> dmesg <==
	[  +0.064426] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072896] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.199988] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.161125] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.305025] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +5.650465] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.063277] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.720762] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.542086] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.810513] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.079598] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.351350] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.165467] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.644924] kauditd_printk_skb: 80 callbacks suppressed
	[Mar 8 04:01] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.152994] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.176820] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.164080] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.325007] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +3.898531] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +3.953689] kauditd_printk_skb: 191 callbacks suppressed
	[ +18.228401] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[Mar 8 04:02] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.217336] systemd-fstab-generator[3631]: Ignoring "noauto" option for root device
	[  +0.085255] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d] <==
	{"level":"info","ts":"2024-03-08T04:01:37.50915Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:38.998401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.9985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.998512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.998521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:39.005362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-851116 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:01:39.005568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:01:39.007136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:01:39.007159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:01:39.007178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:01:39.007334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:01:39.008204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2024-03-08T04:01:57.511055Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-08T04:01:57.511215Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-851116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	{"level":"warn","ts":"2024-03-08T04:01:57.5113Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.511352Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.513325Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.513373Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T04:01:57.51347Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a3b04ba9ccd2eedd","current-leader-member-id":"a3b04ba9ccd2eedd"}
	{"level":"info","ts":"2024-03-08T04:01:57.517139Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:57.517296Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:57.517321Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-851116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	
	
	==> etcd [dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64] <==
	{"level":"info","ts":"2024-03-08T04:02:01.081983Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:02:01.082104Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:02:01.083116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd switched to configuration voters=(11795010616741261021)"}
	{"level":"info","ts":"2024-03-08T04:02:01.08514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","added-peer-id":"a3b04ba9ccd2eedd","added-peer-peer-urls":["https://192.168.83.77:2380"]}
	{"level":"info","ts":"2024-03-08T04:02:01.087133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:02:01.087338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:02:01.100384Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:02:01.10058Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a3b04ba9ccd2eedd","initial-advertise-peer-urls":["https://192.168.83.77:2380"],"listen-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:02:01.100633Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:02:01.100673Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:02:01.100704Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:02:02.316597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.316844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.316885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.317037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.321861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:02:02.321862Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-851116 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:02:02.322308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:02:02.323194Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2024-03-08T04:02:02.323553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:02:02.323593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:02:02.324156Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 04:02:19 up 2 min,  0 users,  load average: 0.85, 0.36, 0.13
	Linux pause-851116 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3] <==
	I0308 04:01:47.453889       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0308 04:01:47.454048       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0308 04:01:47.454117       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0308 04:01:47.454181       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 04:01:47.454237       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0308 04:01:47.454267       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0308 04:01:47.454322       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 04:01:47.454369       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0308 04:01:47.454392       1 available_controller.go:439] Shutting down AvailableConditionController
	I0308 04:01:47.455402       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0308 04:01:47.456087       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:01:47.456181       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:01:47.456216       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0308 04:01:47.456289       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0308 04:01:47.456369       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0308 04:01:47.456467       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0308 04:01:47.456523       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 04:01:47.463367       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:01:47.467990       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0308 04:01:47.469045       1 controller.go:159] Shutting down quota evaluator
	I0308 04:01:47.469108       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469452       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469508       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469521       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469531       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619] <==
	I0308 04:02:03.619171       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:02:03.620744       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 04:02:03.620784       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 04:02:03.709296       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 04:02:03.709384       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 04:02:03.709486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 04:02:03.711442       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 04:02:03.716762       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 04:02:03.718325       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 04:02:03.718855       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 04:02:03.720863       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 04:02:03.724712       1 aggregator.go:166] initial CRD sync complete...
	I0308 04:02:03.724763       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 04:02:03.724770       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 04:02:03.724777       1 cache.go:39] Caches are synced for autoregister controller
	I0308 04:02:03.764268       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 04:02:04.615609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 04:02:05.312227       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.83.77]
	I0308 04:02:05.314045       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 04:02:05.322152       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 04:02:05.450768       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 04:02:05.465085       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 04:02:05.516748       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 04:02:05.551225       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 04:02:05.562556       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a] <==
	I0308 04:01:42.806982       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0308 04:01:42.809860       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0308 04:01:42.810155       1 disruption.go:433] "Sending events to api server."
	I0308 04:01:42.810211       1 disruption.go:444] "Starting disruption controller"
	I0308 04:01:42.810236       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0308 04:01:42.812783       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0308 04:01:42.813140       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0308 04:01:42.813194       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0308 04:01:42.816514       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0308 04:01:42.816776       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0308 04:01:42.816812       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0308 04:01:42.820696       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0308 04:01:42.820773       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0308 04:01:42.821083       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0308 04:01:42.834106       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0308 04:01:42.834194       1 namespace_controller.go:197] "Starting namespace controller"
	I0308 04:01:42.834372       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0308 04:01:42.836531       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0308 04:01:42.836707       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0308 04:01:42.851959       1 shared_informer.go:318] Caches are synced for tokens
	W0308 04:01:52.841125       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:53.342319       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:54.343447       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:56.344623       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	E0308 04:01:56.345015       1 cidr_allocator.go:156] "Failed to list all nodes" err="Get \"https://192.168.83.77:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-controller-manager [ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7] <==
	I0308 04:02:16.394516       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0308 04:02:16.395009       1 shared_informer.go:318] Caches are synced for ephemeral
	I0308 04:02:16.395068       1 shared_informer.go:318] Caches are synced for stateful set
	I0308 04:02:16.395107       1 shared_informer.go:318] Caches are synced for cronjob
	I0308 04:02:16.395208       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0308 04:02:16.396724       1 shared_informer.go:318] Caches are synced for HPA
	I0308 04:02:16.398985       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0308 04:02:16.400037       1 shared_informer.go:318] Caches are synced for daemon sets
	I0308 04:02:16.402301       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0308 04:02:16.409025       1 shared_informer.go:318] Caches are synced for crt configmap
	I0308 04:02:16.409148       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0308 04:02:16.411061       1 shared_informer.go:318] Caches are synced for GC
	I0308 04:02:16.413130       1 shared_informer.go:318] Caches are synced for job
	I0308 04:02:16.433322       1 shared_informer.go:318] Caches are synced for namespace
	I0308 04:02:16.446140       1 shared_informer.go:318] Caches are synced for persistent volume
	I0308 04:02:16.459989       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0308 04:02:16.502800       1 shared_informer.go:318] Caches are synced for deployment
	I0308 04:02:16.505644       1 shared_informer.go:318] Caches are synced for disruption
	I0308 04:02:16.511649       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0308 04:02:16.511821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.418µs"
	I0308 04:02:16.516079       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 04:02:16.577412       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 04:02:16.941955       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 04:02:16.942121       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 04:02:16.952380       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7] <==
	I0308 04:01:37.870702       1 server_others.go:69] "Using iptables proxy"
	I0308 04:01:40.859454       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0308 04:01:40.986223       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:01:40.986251       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:01:40.994696       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:01:40.995098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:01:40.995553       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:01:40.996152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:01:40.997853       1 config.go:188] "Starting service config controller"
	I0308 04:01:40.998166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:01:40.998275       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:01:40.998306       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:01:40.999331       1 config.go:315] "Starting node config controller"
	I0308 04:01:40.999457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:01:41.098808       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 04:01:41.099037       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:01:41.100331       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a] <==
	I0308 04:02:05.279635       1 server_others.go:69] "Using iptables proxy"
	I0308 04:02:05.308351       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0308 04:02:05.377403       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:02:05.377425       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:02:05.380223       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:02:05.380275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:02:05.380408       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:02:05.380416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:02:05.381698       1 config.go:188] "Starting service config controller"
	I0308 04:02:05.381709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:02:05.381735       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:02:05.381738       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:02:05.382312       1 config.go:315] "Starting node config controller"
	I0308 04:02:05.382320       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:02:05.483113       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:02:05.483252       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:02:05.483271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237] <==
	I0308 04:01:38.536505       1 serving.go:348] Generated self-signed cert in-memory
	W0308 04:01:40.787681       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 04:01:40.787992       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:01:40.788073       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 04:01:40.788162       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 04:01:40.845347       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:01:40.846067       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:01:40.857031       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:01:40.857112       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:01:40.860612       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:01:40.861372       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:01:40.959261       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:01:57.649610       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 04:01:57.649724       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0308 04:01:57.649859       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93] <==
	I0308 04:02:01.448427       1 serving.go:348] Generated self-signed cert in-memory
	W0308 04:02:03.659426       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 04:02:03.659510       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:02:03.659544       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 04:02:03.659567       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 04:02:03.727387       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:02:03.727439       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:02:03.734469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:02:03.734547       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:02:03.737351       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:02:03.737435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:02:03.836134       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.073720    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e56080b9b146d8da78087e166218e6d4-usr-share-ca-certificates\") pod \"kube-apiserver-pause-851116\" (UID: \"e56080b9b146d8da78087e166218e6d4\") " pod="kube-system/kube-apiserver-pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.073740    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a481251b024192a7ac7779eea579bc04-flexvolume-dir\") pod \"kube-controller-manager-pause-851116\" (UID: \"a481251b024192a7ac7779eea579bc04\") " pod="kube-system/kube-controller-manager-pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.270788    3187 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-851116?timeout=10s\": dial tcp 192.168.83.77:8443: connect: connection refused" interval="800ms"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.274993    3187 scope.go:117] "RemoveContainer" containerID="480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.276322    3187 scope.go:117] "RemoveContainer" containerID="9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.279036    3187 scope.go:117] "RemoveContainer" containerID="9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.279682    3187 scope.go:117] "RemoveContainer" containerID="1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.367042    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.367760    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.77:8443: connect: connection refused" node="pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: W0308 04:02:00.777615    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.777676    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Mar 08 04:02:01 pause-851116 kubelet[3187]: I0308 04:02:01.169528    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.764711    3187 kubelet_node_status.go:108] "Node was previously registered" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.765322    3187 kubelet_node_status.go:73] "Successfully registered node" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.767553    3187 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.773763    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: E0308 04:02:03.837163    3187 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851116\" already exists" pod="kube-system/kube-apiserver-pause-851116"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.643604    3187 apiserver.go:52] "Watching apiserver"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.648775    3187 topology_manager.go:215] "Topology Admit Handler" podUID="3bf44768-d86f-46b4-b0d1-d164f794e9ba" podNamespace="kube-system" podName="coredns-5dd5756b68-2fsb6"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.649003    3187 topology_manager.go:215] "Topology Admit Handler" podUID="e29ff4ab-c8ac-470a-a28f-ebc871a56d1e" podNamespace="kube-system" podName="kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.660105    3187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.699844    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29ff4ab-c8ac-470a-a28f-ebc871a56d1e-lib-modules\") pod \"kube-proxy-wbk4h\" (UID: \"e29ff4ab-c8ac-470a-a28f-ebc871a56d1e\") " pod="kube-system/kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.700029    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29ff4ab-c8ac-470a-a28f-ebc871a56d1e-xtables-lock\") pod \"kube-proxy-wbk4h\" (UID: \"e29ff4ab-c8ac-470a-a28f-ebc871a56d1e\") " pod="kube-system/kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.949746    3187 scope.go:117] "RemoveContainer" containerID="1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.950951    3187 scope.go:117] "RemoveContainer" containerID="139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-851116 -n pause-851116
helpers_test.go:261: (dbg) Run:  kubectl --context pause-851116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-851116 -n pause-851116
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-851116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-851116 logs -n 25: (1.339114547s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p offline-crio-290342             | offline-crio-290342       | jenkins | v1.32.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048                 |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-306267          | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-412346          | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:57 UTC | 08 Mar 24 03:59 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-412346          | running-upgrade-412346    | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:00 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-290342             | offline-crio-290342       | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 03:59 UTC |
	| start   | -p pause-851116 --memory=2048      | pause-851116              | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:01 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-306267 stop        | minikube                  | jenkins | v1.26.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 03:59 UTC |
	| start   | -p stopped-upgrade-306267          | stopped-upgrade-306267    | jenkins | v1.32.0 | 08 Mar 24 03:59 UTC | 08 Mar 24 04:00 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-412346          | running-upgrade-412346    | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:00 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:01 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-306267          | stopped-upgrade-306267    | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:00 UTC |
	| start   | -p force-systemd-flag-786598       | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:00 UTC | 08 Mar 24 04:01 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-851116                    | pause-851116              | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:02 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:02 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-786598 ssh cat  | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-786598       | force-systemd-flag-786598 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p cert-expiration-401581          | cert-expiration-401581    | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-219954       | kubernetes-upgrade-219954 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC | 08 Mar 24 04:01 UTC |
	| start   | -p kubernetes-upgrade-219954       | kubernetes-upgrade-219954 | jenkins | v1.32.0 | 08 Mar 24 04:01 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-995759 sudo        | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC | 08 Mar 24 04:02 UTC |
	| start   | -p NoKubernetes-995759             | NoKubernetes-995759       | jenkins | v1.32.0 | 08 Mar 24 04:02 UTC |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
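	For anyone replaying one of the profiles above, a representative start invocation can be run as a single command line. This is only a sketch: the flags are copied verbatim from the pause-851116 row of the table, and the binary path is the MINIKUBE_BIN used by this run; substitute your own profile name and binary as needed.
	
	  # Reproduce the pause-851116 start recorded above (flags copied from the table row)
	  out/minikube-linux-amd64 start -p pause-851116 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio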
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:02:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:02:10.034343  952782 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:02:10.034576  952782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:02:10.034579  952782 out.go:304] Setting ErrFile to fd 2...
	I0308 04:02:10.034582  952782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:02:10.034771  952782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:02:10.035312  952782 out.go:298] Setting JSON to false
	I0308 04:02:10.036382  952782 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27856,"bootTime":1709842674,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:02:10.036444  952782 start.go:139] virtualization: kvm guest
	I0308 04:02:10.038665  952782 out.go:177] * [NoKubernetes-995759] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:02:10.039986  952782 notify.go:220] Checking for updates...
	I0308 04:02:10.040004  952782 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:02:10.041283  952782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:02:10.042450  952782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:02:10.043600  952782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:02:10.044709  952782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:02:10.045766  952782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:02:10.047411  952782 config.go:182] Loaded profile config "NoKubernetes-995759": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0308 04:02:10.047975  952782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:02:10.048026  952782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:02:10.063843  952782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0308 04:02:10.064282  952782 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:02:10.064850  952782 main.go:141] libmachine: Using API Version  1
	I0308 04:02:10.064865  952782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:02:10.065210  952782 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:02:10.065432  952782 main.go:141] libmachine: (NoKubernetes-995759) Calling .DriverName
	I0308 04:02:10.065658  952782 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0308 04:02:10.065675  952782 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:02:10.065940  952782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:02:10.065969  952782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:02:10.080320  952782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0308 04:02:10.080731  952782 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:02:10.081228  952782 main.go:141] libmachine: Using API Version  1
	I0308 04:02:10.081244  952782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:02:10.081653  952782 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:02:10.081821  952782 main.go:141] libmachine: (NoKubernetes-995759) Calling .DriverName
	I0308 04:02:10.116903  952782 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:02:10.118233  952782 start.go:297] selected driver: kvm2
	I0308 04:02:10.118240  952782 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-995759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-995759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:02:10.118333  952782 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:02:10.118685  952782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:02:10.118772  952782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:02:10.133810  952782 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:02:10.134644  952782 cni.go:84] Creating CNI manager for ""
	I0308 04:02:10.134660  952782 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:02:10.134718  952782 start.go:340] cluster config:
	{Name:NoKubernetes-995759 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-995759 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.176 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:02:10.134813  952782 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:02:10.136409  952782 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-995759
	I0308 04:02:06.123891  951650 pod_ready.go:92] pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:06.123919  951650 pod_ready.go:81] duration metric: took 508.843052ms for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:06.123931  951650 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:08.132719  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:10.132787  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:10.187253  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:10.187874  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:10.187891  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:10.187823  952500 retry.go:31] will retry after 1.124544025s: waiting for machine to come up
	I0308 04:02:11.314586  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:11.315151  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:11.315175  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:11.315108  952500 retry.go:31] will retry after 1.098017703s: waiting for machine to come up
	I0308 04:02:12.414523  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:12.415050  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:12.415075  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:12.414996  952500 retry.go:31] will retry after 1.668632531s: waiting for machine to come up
	I0308 04:02:14.084972  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:14.085584  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:14.085606  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:14.085515  952500 retry.go:31] will retry after 1.604300431s: waiting for machine to come up
	I0308 04:02:10.137662  952782 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0308 04:02:10.162279  952782 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0308 04:02:10.162412  952782 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/NoKubernetes-995759/config.json ...
	I0308 04:02:10.162627  952782 start.go:360] acquireMachinesLock for NoKubernetes-995759: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:02:12.631496  951650 pod_ready.go:102] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"False"
	I0308 04:02:13.631726  951650 pod_ready.go:92] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:13.631753  951650 pod_ready.go:81] duration metric: took 7.507813921s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.631762  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.637217  951650 pod_ready.go:92] pod "kube-apiserver-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:13.637239  951650 pod_ready.go:81] duration metric: took 5.471327ms for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:13.637247  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.145944  951650 pod_ready.go:92] pod "kube-controller-manager-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.145980  951650 pod_ready.go:81] duration metric: took 1.508724505s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.145995  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.151989  951650 pod_ready.go:92] pod "kube-proxy-wbk4h" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.152022  951650 pod_ready.go:81] duration metric: took 6.018942ms for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.152033  951650 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.157883  951650 pod_ready.go:92] pod "kube-scheduler-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.157906  951650 pod_ready.go:81] duration metric: took 5.86108ms for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.157916  951650 pod_ready.go:38] duration metric: took 9.558488441s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:15.157941  951650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:02:15.173319  951650 ops.go:34] apiserver oom_adj: -16
	I0308 04:02:15.173340  951650 kubeadm.go:591] duration metric: took 36.908164026s to restartPrimaryControlPlane
	I0308 04:02:15.173350  951650 kubeadm.go:393] duration metric: took 37.083797167s to StartCluster
	I0308 04:02:15.173372  951650 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:02:15.173453  951650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:02:15.174393  951650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:02:15.174695  951650 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.83.77 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:02:15.177328  951650 out.go:177] * Verifying Kubernetes components...
	I0308 04:02:15.174881  951650 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:02:15.174984  951650 config.go:182] Loaded profile config "pause-851116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:02:15.178871  951650 out.go:177] * Enabled addons: 
	I0308 04:02:15.181683  951650 addons.go:505] duration metric: took 6.802441ms for enable addons: enabled=[]
	I0308 04:02:15.180408  951650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:02:15.344795  951650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:02:15.362686  951650 node_ready.go:35] waiting up to 6m0s for node "pause-851116" to be "Ready" ...
	I0308 04:02:15.366950  951650 node_ready.go:49] node "pause-851116" has status "Ready":"True"
	I0308 04:02:15.366971  951650 node_ready.go:38] duration metric: took 4.243323ms for node "pause-851116" to be "Ready" ...
	I0308 04:02:15.366981  951650 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:15.373384  951650 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.629528  951650 pod_ready.go:92] pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:15.629579  951650 pod_ready.go:81] duration metric: took 256.172602ms for pod "coredns-5dd5756b68-2fsb6" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:15.629594  951650 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.033535  951650 pod_ready.go:92] pod "etcd-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.033575  951650 pod_ready.go:81] duration metric: took 403.971348ms for pod "etcd-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.033590  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.430827  951650 pod_ready.go:92] pod "kube-apiserver-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.430860  951650 pod_ready.go:81] duration metric: took 397.25973ms for pod "kube-apiserver-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.430876  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.828414  951650 pod_ready.go:92] pod "kube-controller-manager-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:16.828450  951650 pod_ready.go:81] duration metric: took 397.564618ms for pod "kube-controller-manager-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:16.828464  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.229295  951650 pod_ready.go:92] pod "kube-proxy-wbk4h" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:17.229325  951650 pod_ready.go:81] duration metric: took 400.852537ms for pod "kube-proxy-wbk4h" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.229338  951650 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.628615  951650 pod_ready.go:92] pod "kube-scheduler-pause-851116" in "kube-system" namespace has status "Ready":"True"
	I0308 04:02:17.628642  951650 pod_ready.go:81] duration metric: took 399.296089ms for pod "kube-scheduler-pause-851116" in "kube-system" namespace to be "Ready" ...
	I0308 04:02:17.628651  951650 pod_ready.go:38] duration metric: took 2.261657544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:02:17.628667  951650 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:02:17.628725  951650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:02:17.644197  951650 api_server.go:72] duration metric: took 2.469456847s to wait for apiserver process to appear ...
	I0308 04:02:17.644227  951650 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:02:17.644251  951650 api_server.go:253] Checking apiserver healthz at https://192.168.83.77:8443/healthz ...
	I0308 04:02:17.649378  951650 api_server.go:279] https://192.168.83.77:8443/healthz returned 200:
	ok
	I0308 04:02:17.650674  951650 api_server.go:141] control plane version: v1.28.4
	I0308 04:02:17.650701  951650 api_server.go:131] duration metric: took 6.464598ms to wait for apiserver health ...
	I0308 04:02:17.650720  951650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:02:17.832023  951650 system_pods.go:59] 6 kube-system pods found
	I0308 04:02:17.832067  951650 system_pods.go:61] "coredns-5dd5756b68-2fsb6" [3bf44768-d86f-46b4-b0d1-d164f794e9ba] Running
	I0308 04:02:17.832074  951650 system_pods.go:61] "etcd-pause-851116" [6dd9d85c-344b-4354-81f4-42e72ae1d443] Running
	I0308 04:02:17.832079  951650 system_pods.go:61] "kube-apiserver-pause-851116" [c83fa9ab-6cde-46fd-a4cc-e6081f4e1634] Running
	I0308 04:02:17.832084  951650 system_pods.go:61] "kube-controller-manager-pause-851116" [c963e21a-f2ad-4a2d-a434-f0c5435d5c15] Running
	I0308 04:02:17.832088  951650 system_pods.go:61] "kube-proxy-wbk4h" [e29ff4ab-c8ac-470a-a28f-ebc871a56d1e] Running
	I0308 04:02:17.832092  951650 system_pods.go:61] "kube-scheduler-pause-851116" [7419809a-3421-4e63-abc5-3c6a1b0e641c] Running
	I0308 04:02:17.832108  951650 system_pods.go:74] duration metric: took 181.372536ms to wait for pod list to return data ...
	I0308 04:02:17.832119  951650 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:02:18.028968  951650 default_sa.go:45] found service account: "default"
	I0308 04:02:18.029001  951650 default_sa.go:55] duration metric: took 196.871761ms for default service account to be created ...
	I0308 04:02:18.029011  951650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:02:18.232859  951650 system_pods.go:86] 6 kube-system pods found
	I0308 04:02:18.232903  951650 system_pods.go:89] "coredns-5dd5756b68-2fsb6" [3bf44768-d86f-46b4-b0d1-d164f794e9ba] Running
	I0308 04:02:18.232911  951650 system_pods.go:89] "etcd-pause-851116" [6dd9d85c-344b-4354-81f4-42e72ae1d443] Running
	I0308 04:02:18.232917  951650 system_pods.go:89] "kube-apiserver-pause-851116" [c83fa9ab-6cde-46fd-a4cc-e6081f4e1634] Running
	I0308 04:02:18.232923  951650 system_pods.go:89] "kube-controller-manager-pause-851116" [c963e21a-f2ad-4a2d-a434-f0c5435d5c15] Running
	I0308 04:02:18.232929  951650 system_pods.go:89] "kube-proxy-wbk4h" [e29ff4ab-c8ac-470a-a28f-ebc871a56d1e] Running
	I0308 04:02:18.232934  951650 system_pods.go:89] "kube-scheduler-pause-851116" [7419809a-3421-4e63-abc5-3c6a1b0e641c] Running
	I0308 04:02:18.232944  951650 system_pods.go:126] duration metric: took 203.925067ms to wait for k8s-apps to be running ...
	I0308 04:02:18.232954  951650 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:02:18.233036  951650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:02:18.248074  951650 system_svc.go:56] duration metric: took 15.106878ms WaitForService to wait for kubelet
	I0308 04:02:18.248112  951650 kubeadm.go:576] duration metric: took 3.073378532s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:02:18.248141  951650 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:02:18.429809  951650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:02:18.429834  951650 node_conditions.go:123] node cpu capacity is 2
	I0308 04:02:18.429848  951650 node_conditions.go:105] duration metric: took 181.700492ms to run NodePressure ...
	I0308 04:02:18.429864  951650 start.go:240] waiting for startup goroutines ...
	I0308 04:02:18.429875  951650 start.go:245] waiting for cluster config update ...
	I0308 04:02:18.429885  951650 start.go:254] writing updated cluster config ...
	I0308 04:02:18.430226  951650 ssh_runner.go:195] Run: rm -f paused
	I0308 04:02:18.488446  951650 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:02:18.490554  951650 out.go:177] * Done! kubectl is now configured to use "pause-851116" cluster and "default" namespace by default
	I0308 04:02:15.692240  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:15.692807  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:15.692830  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:15.692743  952500 retry.go:31] will retry after 2.66528766s: waiting for machine to come up
	I0308 04:02:18.360171  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | domain cert-expiration-401581 has defined MAC address 52:54:00:6d:41:a8 in network mk-cert-expiration-401581
	I0308 04:02:18.360663  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | unable to find current IP address of domain cert-expiration-401581 in network mk-cert-expiration-401581
	I0308 04:02:18.360689  952273 main.go:141] libmachine: (cert-expiration-401581) DBG | I0308 04:02:18.360618  952500 retry.go:31] will retry after 3.424913064s: waiting for machine to come up
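	The pause-851116 readiness sequence logged above reduces to four manual checks: the node Ready condition, the per-pod Ready waits in kube-system, the apiserver /healthz probe, and the kubelet service check. A minimal reproduction is sketched below; the context name follows minikube's profile-name convention, the endpoint 192.168.83.77:8443 is taken from the log, and the certificate paths under $MINIKUBE_HOME are assumptions based on minikube's default layout.
	
	  # Node and kube-system pod readiness (context name assumed to equal the profile name)
	  kubectl --context pause-851116 get nodes
	  kubectl --context pause-851116 -n kube-system get pods
	  # Apiserver healthz probe, as checked at 04:02:17 in the log (cert paths are assumed defaults)
	  curl --cacert $MINIKUBE_HOME/ca.crt \
	       --cert $MINIKUBE_HOME/profiles/pause-851116/client.crt \
	       --key $MINIKUBE_HOME/profiles/pause-851116/client.key \
	       https://192.168.83.77:8443/healthz
	  # Kubelet service check, the same command the log runs over SSH
	  out/minikube-linux-amd64 -p pause-851116 ssh "sudo systemctl is-active --quiet service kubelet"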
	
	
	==> CRI-O <==
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.279581388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870541279560148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b30e9ca0-00fd-456b-8449-5c781f1fda83 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.283635321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932fa2f5-fef6-4d8a-843c-adfaac6acc3d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.283725068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932fa2f5-fef6-4d8a-843c-adfaac6acc3d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.284046345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932fa2f5-fef6-4d8a-843c-adfaac6acc3d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.327157972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31f2f811-528e-4863-8ad8-4e54b748c7a4 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.327270184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31f2f811-528e-4863-8ad8-4e54b748c7a4 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.331318068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d19ad913-e2e1-4d5d-b7fa-d8a3c3b72db7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.331647338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870541331629563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d19ad913-e2e1-4d5d-b7fa-d8a3c3b72db7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.332360148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2859c9c5-0545-46f0-ba16-9f8f0b66b86d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.332436518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2859c9c5-0545-46f0-ba16-9f8f0b66b86d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.332655721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2859c9c5-0545-46f0-ba16-9f8f0b66b86d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.376703641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=606801a9-bcc3-4038-a3f1-66917dafa2f6 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.376769314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=606801a9-bcc3-4038-a3f1-66917dafa2f6 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.378781015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6565e98d-d5ad-48b5-87d9-4159e1bddec1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.379226346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870541379198607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6565e98d-d5ad-48b5-87d9-4159e1bddec1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.380120441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02b09b24-bfb3-42fc-bee9-5a9c6375e3d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.380174703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02b09b24-bfb3-42fc-bee9-5a9c6375e3d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.380412066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02b09b24-bfb3-42fc-bee9-5a9c6375e3d8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.422495084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d7627d7-acc0-4668-b54b-58138431d5f5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.422618875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d7627d7-acc0-4668-b54b-58138431d5f5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.424307515Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f430c6d6-6d4e-40dd-aef0-0f95af68f46b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.424665309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709870541424646521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f430c6d6-6d4e-40dd-aef0-0f95af68f46b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.425439994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21045ad7-a55c-4cc3-8936-0aae62886a94 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.425494773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21045ad7-a55c-4cc3-8936-0aae62886a94 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:02:21 pause-851116 crio[2298]: time="2024-03-08 04:02:21.425732496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709870525020708436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709870524967456694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709870520335616380,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annot
ations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709870520326285135,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a481251b024192a7ac7779eea579bc0
4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709870520304873673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:ma
p[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709870520303116457,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7,PodSandboxId:48e1395f0fe7f4102d78aad42ded1eaf0fd548f45077119f225c6eef201f6700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1709870496156465596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wbk4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e29ff4ab-c8ac-470a-a28f-ebc871a56d1e,},Annotations:map[string]string{io.kubernetes.container.hash: 60f110
be,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa,PodSandboxId:dc8ce2e49bbbbd1c95e2ccc2c626520afc274a6d3adf14ea33bce9a5b32cedd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1709870496552886246,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2fsb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf44768-d86f-46b4-b0d1-d164f794e9ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3c371f33,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3,PodSandboxId:2f9176baaea382f248445ca3065c5621cda5dbd965e4318eb66cea0021cbf993,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1709870496374695195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56080b9b146d8da78087e166218e6d4,},Annotations:map[string]string{io.kubernetes.container.hash: edd5929a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d,PodSandboxId:a8386b75b008d96b74b54f1b19ea5b0bc4b38f6e6f75004ef2f6f261ae7245bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1709870496321128629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-851116,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 72bbc6e5537cc97de543394c56a71a93,},Annotations:map[string]string{io.kubernetes.container.hash: 6562e870,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237,PodSandboxId:d5f2bfc8e42b966cff3e96f5c9b1d0ca0f7198dbfcb9f5fc170521b85d4eee4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1709870496206257652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: cce25ff0413bd283fdc3c58ec08c8ac8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a,PodSandboxId:22008928f2434babde46b4046def3542b4033b53bb0120a77210458fb94ae773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1709870496184535242,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-851116,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: a481251b024192a7ac7779eea579bc04,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21045ad7-a55c-4cc3-8936-0aae62886a94 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ddcdf4c43dfd       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 seconds ago      Running             kube-proxy                2                   48e1395f0fe7f       kube-proxy-wbk4h
	956c3c7f5596e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago      Running             coredns                   2                   dc8ce2e49bbbb       coredns-5dd5756b68-2fsb6
	dfb6a2dd51f3e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   21 seconds ago      Running             etcd                      2                   a8386b75b008d       etcd-pause-851116
	ded9362be3954       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   21 seconds ago      Running             kube-controller-manager   2                   22008928f2434       kube-controller-manager-pause-851116
	6f1a9d669b6c1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   21 seconds ago      Running             kube-apiserver            2                   2f9176baaea38       kube-apiserver-pause-851116
	707066bebea8f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   21 seconds ago      Running             kube-scheduler            2                   d5f2bfc8e42b9       kube-scheduler-pause-851116
	139651010c6a1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   44 seconds ago      Exited              coredns                   1                   dc8ce2e49bbbb       coredns-5dd5756b68-2fsb6
	480c64bb66896       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   45 seconds ago      Exited              kube-apiserver            1                   2f9176baaea38       kube-apiserver-pause-851116
	9b285b709a8d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   45 seconds ago      Exited              etcd                      1                   a8386b75b008d       etcd-pause-851116
	1d92a46ebae62       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   45 seconds ago      Exited              kube-scheduler            1                   d5f2bfc8e42b9       kube-scheduler-pause-851116
	9d8e855a1dd49       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   45 seconds ago      Exited              kube-controller-manager   1                   22008928f2434       kube-controller-manager-pause-851116
	1c2505a6294ca       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   45 seconds ago      Exited              kube-proxy                1                   48e1395f0fe7f       kube-proxy-wbk4h
	
	
	==> coredns [139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34095 - 4764 "HINFO IN 7577450993068145099.5917319491977946482. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063747664s
	
	
	==> coredns [956c3c7f5596ef9eaf97659c3be7b8ab486be3c2b75264503d1ba94b4bdfae3a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56306 - 25975 "HINFO IN 7224382692144182061.814941936348778439. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011050357s
	
	
	==> describe nodes <==
	Name:               pause-851116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-851116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=pause-851116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_00_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:00:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-851116
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:02:03 +0000   Fri, 08 Mar 2024 04:00:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.77
	  Hostname:    pause-851116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b3d68dd9f834ada8d595888f9f0f884
	  System UUID:                3b3d68dd-9f83-4ada-8d59-5888f9f0f884
	  Boot ID:                    e69c68d2-80a9-45b5-9e3a-a8b4d04836ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2fsb6                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-pause-851116                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-851116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-851116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-wbk4h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-851116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 40s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeReady                119s                 kubelet          Node pause-851116 status is now: NodeReady
	  Normal  RegisteredNode           109s                 node-controller  Node pause-851116 event: Registered Node pause-851116 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-851116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-851116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-851116 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-851116 event: Registered Node pause-851116 in Controller
	
	
	==> dmesg <==
	[  +0.064426] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072896] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.199988] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.161125] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.305025] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +5.650465] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.063277] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.720762] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.542086] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.810513] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.079598] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.351350] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.165467] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.644924] kauditd_printk_skb: 80 callbacks suppressed
	[Mar 8 04:01] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.152994] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.176820] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.164080] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.325007] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +3.898531] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +3.953689] kauditd_printk_skb: 191 callbacks suppressed
	[ +18.228401] systemd-fstab-generator[3180]: Ignoring "noauto" option for root device
	[Mar 8 04:02] kauditd_printk_skb: 43 callbacks suppressed
	[ +10.217336] systemd-fstab-generator[3631]: Ignoring "noauto" option for root device
	[  +0.085255] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d] <==
	{"level":"info","ts":"2024-03-08T04:01:37.50915Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:38.998401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 2"}
	{"level":"info","ts":"2024-03-08T04:01:38.998492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.9985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.998512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:38.998521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:01:39.005362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-851116 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:01:39.005568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:01:39.007136Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:01:39.007159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:01:39.007178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:01:39.007334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:01:39.008204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2024-03-08T04:01:57.511055Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-08T04:01:57.511215Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-851116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	{"level":"warn","ts":"2024-03-08T04:01:57.5113Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.511352Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.513325Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-08T04:01:57.513373Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.77:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-08T04:01:57.51347Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a3b04ba9ccd2eedd","current-leader-member-id":"a3b04ba9ccd2eedd"}
	{"level":"info","ts":"2024-03-08T04:01:57.517139Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:57.517296Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:01:57.517321Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-851116","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"]}
	
	
	==> etcd [dfb6a2dd51f3ebf61c701d35f8eae213ed3f9b4d2b44d09b7ebae8aaa17b7e64] <==
	{"level":"info","ts":"2024-03-08T04:02:01.081983Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:02:01.082104Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T04:02:01.083116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd switched to configuration voters=(11795010616741261021)"}
	{"level":"info","ts":"2024-03-08T04:02:01.08514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","added-peer-id":"a3b04ba9ccd2eedd","added-peer-peer-urls":["https://192.168.83.77:2380"]}
	{"level":"info","ts":"2024-03-08T04:02:01.087133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24a0af5c19e7de30","local-member-id":"a3b04ba9ccd2eedd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:02:01.087338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:02:01.100384Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:02:01.10058Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a3b04ba9ccd2eedd","initial-advertise-peer-urls":["https://192.168.83.77:2380"],"listen-peer-urls":["https://192.168.83.77:2380"],"advertise-client-urls":["https://192.168.83.77:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.77:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:02:01.100633Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:02:01.100673Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:02:01.100704Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.77:2380"}
	{"level":"info","ts":"2024-03-08T04:02:02.316597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgPreVoteResp from a3b04ba9ccd2eedd at term 3"}
	{"level":"info","ts":"2024-03-08T04:02:02.316807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became candidate at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.316844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd received MsgVoteResp from a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.316885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a3b04ba9ccd2eedd became leader at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.317037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a3b04ba9ccd2eedd elected leader a3b04ba9ccd2eedd at term 4"}
	{"level":"info","ts":"2024-03-08T04:02:02.321861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:02:02.321862Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a3b04ba9ccd2eedd","local-member-attributes":"{Name:pause-851116 ClientURLs:[https://192.168.83.77:2379]}","request-path":"/0/members/a3b04ba9ccd2eedd/attributes","cluster-id":"24a0af5c19e7de30","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:02:02.322308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:02:02.323194Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.77:2379"}
	{"level":"info","ts":"2024-03-08T04:02:02.323553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:02:02.323593Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:02:02.324156Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 04:02:21 up 2 min,  0 users,  load average: 0.85, 0.36, 0.13
	Linux pause-851116 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3] <==
	I0308 04:01:47.453889       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0308 04:01:47.454048       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0308 04:01:47.454117       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0308 04:01:47.454181       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 04:01:47.454237       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0308 04:01:47.454267       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0308 04:01:47.454322       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0308 04:01:47.454369       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0308 04:01:47.454392       1 available_controller.go:439] Shutting down AvailableConditionController
	I0308 04:01:47.455402       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0308 04:01:47.456087       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:01:47.456181       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:01:47.456216       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0308 04:01:47.456289       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0308 04:01:47.456369       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0308 04:01:47.456467       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0308 04:01:47.456523       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 04:01:47.463367       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 04:01:47.467990       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0308 04:01:47.469045       1 controller.go:159] Shutting down quota evaluator
	I0308 04:01:47.469108       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469452       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469508       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469521       1 controller.go:178] quota evaluator worker shutdown
	I0308 04:01:47.469531       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [6f1a9d669b6c114a547415e0e0cb5d995441b3bd411291055982219d4d6c9619] <==
	I0308 04:02:03.619171       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 04:02:03.620744       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0308 04:02:03.620784       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 04:02:03.709296       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 04:02:03.709384       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 04:02:03.709486       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 04:02:03.711442       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 04:02:03.716762       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 04:02:03.718325       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 04:02:03.718855       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 04:02:03.720863       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 04:02:03.724712       1 aggregator.go:166] initial CRD sync complete...
	I0308 04:02:03.724763       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 04:02:03.724770       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 04:02:03.724777       1 cache.go:39] Caches are synced for autoregister controller
	I0308 04:02:03.764268       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 04:02:04.615609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 04:02:05.312227       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.83.77]
	I0308 04:02:05.314045       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 04:02:05.322152       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 04:02:05.450768       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 04:02:05.465085       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 04:02:05.516748       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 04:02:05.551225       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 04:02:05.562556       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a] <==
	I0308 04:01:42.806982       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0308 04:01:42.809860       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0308 04:01:42.810155       1 disruption.go:433] "Sending events to api server."
	I0308 04:01:42.810211       1 disruption.go:444] "Starting disruption controller"
	I0308 04:01:42.810236       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0308 04:01:42.812783       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0308 04:01:42.813140       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0308 04:01:42.813194       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0308 04:01:42.816514       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0308 04:01:42.816776       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0308 04:01:42.816812       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0308 04:01:42.820696       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0308 04:01:42.820773       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0308 04:01:42.821083       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0308 04:01:42.834106       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0308 04:01:42.834194       1 namespace_controller.go:197] "Starting namespace controller"
	I0308 04:01:42.834372       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0308 04:01:42.836531       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0308 04:01:42.836707       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0308 04:01:42.851959       1 shared_informer.go:318] Caches are synced for tokens
	W0308 04:01:52.841125       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:53.342319       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:54.343447       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	W0308 04:01:56.344623       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.83.77:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.83.77:8443: connect: connection refused
	E0308 04:01:56.345015       1 cidr_allocator.go:156] "Failed to list all nodes" err="Get \"https://192.168.83.77:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-controller-manager [ded9362be3954c24b9585bdb4ed9921e5150377c309f27a207533012ccb85df7] <==
	I0308 04:02:16.394516       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0308 04:02:16.395009       1 shared_informer.go:318] Caches are synced for ephemeral
	I0308 04:02:16.395068       1 shared_informer.go:318] Caches are synced for stateful set
	I0308 04:02:16.395107       1 shared_informer.go:318] Caches are synced for cronjob
	I0308 04:02:16.395208       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0308 04:02:16.396724       1 shared_informer.go:318] Caches are synced for HPA
	I0308 04:02:16.398985       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0308 04:02:16.400037       1 shared_informer.go:318] Caches are synced for daemon sets
	I0308 04:02:16.402301       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0308 04:02:16.409025       1 shared_informer.go:318] Caches are synced for crt configmap
	I0308 04:02:16.409148       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0308 04:02:16.411061       1 shared_informer.go:318] Caches are synced for GC
	I0308 04:02:16.413130       1 shared_informer.go:318] Caches are synced for job
	I0308 04:02:16.433322       1 shared_informer.go:318] Caches are synced for namespace
	I0308 04:02:16.446140       1 shared_informer.go:318] Caches are synced for persistent volume
	I0308 04:02:16.459989       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0308 04:02:16.502800       1 shared_informer.go:318] Caches are synced for deployment
	I0308 04:02:16.505644       1 shared_informer.go:318] Caches are synced for disruption
	I0308 04:02:16.511649       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0308 04:02:16.511821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.418µs"
	I0308 04:02:16.516079       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 04:02:16.577412       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 04:02:16.941955       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 04:02:16.942121       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 04:02:16.952380       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7] <==
	I0308 04:01:37.870702       1 server_others.go:69] "Using iptables proxy"
	I0308 04:01:40.859454       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0308 04:01:40.986223       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:01:40.986251       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:01:40.994696       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:01:40.995098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:01:40.995553       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:01:40.996152       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:01:40.997853       1 config.go:188] "Starting service config controller"
	I0308 04:01:40.998166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:01:40.998275       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:01:40.998306       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:01:40.999331       1 config.go:315] "Starting node config controller"
	I0308 04:01:40.999457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:01:41.098808       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 04:01:41.099037       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:01:41.100331       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [7ddcdf4c43dfd4135b8a62ceb61adac1ad3175598673f1bef3b615dcc4626f9a] <==
	I0308 04:02:05.279635       1 server_others.go:69] "Using iptables proxy"
	I0308 04:02:05.308351       1 node.go:141] Successfully retrieved node IP: 192.168.83.77
	I0308 04:02:05.377403       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:02:05.377425       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:02:05.380223       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:02:05.380275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:02:05.380408       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:02:05.380416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:02:05.381698       1 config.go:188] "Starting service config controller"
	I0308 04:02:05.381709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:02:05.381735       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:02:05.381738       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:02:05.382312       1 config.go:315] "Starting node config controller"
	I0308 04:02:05.382320       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:02:05.483113       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:02:05.483252       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:02:05.483271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237] <==
	I0308 04:01:38.536505       1 serving.go:348] Generated self-signed cert in-memory
	W0308 04:01:40.787681       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 04:01:40.787992       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:01:40.788073       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 04:01:40.788162       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 04:01:40.845347       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:01:40.846067       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:01:40.857031       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:01:40.857112       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:01:40.860612       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:01:40.861372       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:01:40.959261       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:01:57.649610       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0308 04:01:57.649724       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0308 04:01:57.649859       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [707066bebea8f544133f627750e45d121a753f82cd52eeabe7c59395a763fc93] <==
	I0308 04:02:01.448427       1 serving.go:348] Generated self-signed cert in-memory
	W0308 04:02:03.659426       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 04:02:03.659510       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:02:03.659544       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 04:02:03.659567       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 04:02:03.727387       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:02:03.727439       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:02:03.734469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:02:03.734547       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:02:03.737351       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:02:03.737435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:02:03.836134       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.073720    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e56080b9b146d8da78087e166218e6d4-usr-share-ca-certificates\") pod \"kube-apiserver-pause-851116\" (UID: \"e56080b9b146d8da78087e166218e6d4\") " pod="kube-system/kube-apiserver-pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.073740    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a481251b024192a7ac7779eea579bc04-flexvolume-dir\") pod \"kube-controller-manager-pause-851116\" (UID: \"a481251b024192a7ac7779eea579bc04\") " pod="kube-system/kube-controller-manager-pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.270788    3187 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-851116?timeout=10s\": dial tcp 192.168.83.77:8443: connect: connection refused" interval="800ms"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.274993    3187 scope.go:117] "RemoveContainer" containerID="480c64bb6689609610eb7794b4e68b2cb5eed8f56615171e2d9ca03d5c0c43f3"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.276322    3187 scope.go:117] "RemoveContainer" containerID="9d8e855a1dd49b22f0e2f0b36e13420029da64e86ff5f3e53cf077c88a96971a"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.279036    3187 scope.go:117] "RemoveContainer" containerID="9b285b709a8d51f114eed03d4cb3063331fe33d821bf0523e797e3c2c605060d"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.279682    3187 scope.go:117] "RemoveContainer" containerID="1d92a46ebae62f1b8cdd8f264f355d767691e21bc9ecf2b751b72c9e5bda9237"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: I0308 04:02:00.367042    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.367760    3187 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.77:8443: connect: connection refused" node="pause-851116"
	Mar 08 04:02:00 pause-851116 kubelet[3187]: W0308 04:02:00.777615    3187 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Mar 08 04:02:00 pause-851116 kubelet[3187]: E0308 04:02:00.777676    3187 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.83.77:8443: connect: connection refused
	Mar 08 04:02:01 pause-851116 kubelet[3187]: I0308 04:02:01.169528    3187 kubelet_node_status.go:70] "Attempting to register node" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.764711    3187 kubelet_node_status.go:108] "Node was previously registered" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.765322    3187 kubelet_node_status.go:73] "Successfully registered node" node="pause-851116"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.767553    3187 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: I0308 04:02:03.773763    3187 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 04:02:03 pause-851116 kubelet[3187]: E0308 04:02:03.837163    3187 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-851116\" already exists" pod="kube-system/kube-apiserver-pause-851116"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.643604    3187 apiserver.go:52] "Watching apiserver"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.648775    3187 topology_manager.go:215] "Topology Admit Handler" podUID="3bf44768-d86f-46b4-b0d1-d164f794e9ba" podNamespace="kube-system" podName="coredns-5dd5756b68-2fsb6"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.649003    3187 topology_manager.go:215] "Topology Admit Handler" podUID="e29ff4ab-c8ac-470a-a28f-ebc871a56d1e" podNamespace="kube-system" podName="kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.660105    3187 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.699844    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29ff4ab-c8ac-470a-a28f-ebc871a56d1e-lib-modules\") pod \"kube-proxy-wbk4h\" (UID: \"e29ff4ab-c8ac-470a-a28f-ebc871a56d1e\") " pod="kube-system/kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.700029    3187 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29ff4ab-c8ac-470a-a28f-ebc871a56d1e-xtables-lock\") pod \"kube-proxy-wbk4h\" (UID: \"e29ff4ab-c8ac-470a-a28f-ebc871a56d1e\") " pod="kube-system/kube-proxy-wbk4h"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.949746    3187 scope.go:117] "RemoveContainer" containerID="1c2505a6294ca5d405f7236e71b8ef1ca04904e7c7e5f6357e9edd370a9659b7"
	Mar 08 04:02:04 pause-851116 kubelet[3187]: I0308 04:02:04.950951    3187 scope.go:117] "RemoveContainer" containerID="139651010c6a1dfeb590e2f5f1a39da1042c4a5d791cec50f689e9632af26bfa"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-851116 -n pause-851116
helpers_test.go:261: (dbg) Run:  kubectl --context pause-851116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (281.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m41.673989091s)

                                                
                                                
-- stdout --
	* [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:04:03.481454  956560 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:04:03.481574  956560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:04:03.481590  956560 out.go:304] Setting ErrFile to fd 2...
	I0308 04:04:03.481594  956560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:04:03.481807  956560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:04:03.482409  956560 out.go:298] Setting JSON to false
	I0308 04:04:03.483515  956560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27970,"bootTime":1709842674,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:04:03.484079  956560 start.go:139] virtualization: kvm guest
	I0308 04:04:03.487038  956560 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:04:03.488909  956560 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:04:03.490435  956560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:04:03.488909  956560 notify.go:220] Checking for updates...
	I0308 04:04:03.492167  956560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:04:03.493686  956560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:04:03.495125  956560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:04:03.496515  956560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:04:03.498585  956560 config.go:182] Loaded profile config "cert-expiration-401581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:04:03.498755  956560 config.go:182] Loaded profile config "force-systemd-env-292856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:04:03.498912  956560 config.go:182] Loaded profile config "kubernetes-upgrade-219954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:04:03.499048  956560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:04:03.541326  956560 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:04:03.542661  956560 start.go:297] selected driver: kvm2
	I0308 04:04:03.542680  956560 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:04:03.542696  956560 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:04:03.543465  956560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:04:03.543596  956560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:04:03.560341  956560 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:04:03.560393  956560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 04:04:03.560622  956560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:04:03.560697  956560 cni.go:84] Creating CNI manager for ""
	I0308 04:04:03.560710  956560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:04:03.560719  956560 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 04:04:03.560769  956560 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:04:03.560866  956560 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:04:03.563692  956560 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:04:03.565119  956560 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:04:03.565199  956560 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:04:03.565214  956560 cache.go:56] Caching tarball of preloaded images
	I0308 04:04:03.565339  956560 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:04:03.565356  956560 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:04:03.565459  956560 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:04:03.565478  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json: {Name:mk271a2bcf5b0fb2154966e5daca22ef37eb427b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:03.565642  956560 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:04:12.354588  956560 start.go:364] duration metric: took 8.788882277s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:04:12.354667  956560 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:04:12.354807  956560 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 04:04:12.357067  956560 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 04:04:12.357314  956560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:04:12.357385  956560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:04:12.376323  956560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0308 04:04:12.376854  956560 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:04:12.377531  956560 main.go:141] libmachine: Using API Version  1
	I0308 04:04:12.377566  956560 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:04:12.377969  956560 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:04:12.378217  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:04:12.378401  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:12.378585  956560 start.go:159] libmachine.API.Create for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:04:12.378653  956560 client.go:168] LocalClient.Create starting
	I0308 04:04:12.378694  956560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 04:04:12.378740  956560 main.go:141] libmachine: Decoding PEM data...
	I0308 04:04:12.378767  956560 main.go:141] libmachine: Parsing certificate...
	I0308 04:04:12.378839  956560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 04:04:12.378868  956560 main.go:141] libmachine: Decoding PEM data...
	I0308 04:04:12.378884  956560 main.go:141] libmachine: Parsing certificate...
	I0308 04:04:12.378904  956560 main.go:141] libmachine: Running pre-create checks...
	I0308 04:04:12.378916  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .PreCreateCheck
	I0308 04:04:12.379600  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:04:12.380107  956560 main.go:141] libmachine: Creating machine...
	I0308 04:04:12.380127  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .Create
	I0308 04:04:12.380317  956560 main.go:141] libmachine: (old-k8s-version-496808) Creating KVM machine...
	I0308 04:04:12.381700  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found existing default KVM network
	I0308 04:04:12.383050  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:12.382896  956779 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015730}
	I0308 04:04:12.383092  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | created network xml: 
	I0308 04:04:12.383115  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | <network>
	I0308 04:04:12.383134  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   <name>mk-old-k8s-version-496808</name>
	I0308 04:04:12.383149  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   <dns enable='no'/>
	I0308 04:04:12.383156  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   
	I0308 04:04:12.383166  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 04:04:12.383183  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |     <dhcp>
	I0308 04:04:12.383196  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 04:04:12.383211  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |     </dhcp>
	I0308 04:04:12.383227  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   </ip>
	I0308 04:04:12.383237  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG |   
	I0308 04:04:12.383268  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | </network>
	I0308 04:04:12.383300  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | 
	I0308 04:04:12.389080  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | trying to create private KVM network mk-old-k8s-version-496808 192.168.39.0/24...
	I0308 04:04:12.460541  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | private KVM network mk-old-k8s-version-496808 192.168.39.0/24 created
	I0308 04:04:12.460592  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808 ...
	I0308 04:04:12.460608  956560 main.go:141] libmachine: (old-k8s-version-496808) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 04:04:12.460621  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:12.460513  956779 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:04:12.460883  956560 main.go:141] libmachine: (old-k8s-version-496808) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 04:04:12.712696  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:12.712557  956779 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa...
	I0308 04:04:12.900758  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:12.900586  956779 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/old-k8s-version-496808.rawdisk...
	I0308 04:04:12.900801  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Writing magic tar header
	I0308 04:04:12.900846  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Writing SSH key tar header
	I0308 04:04:12.900916  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:12.900707  956779 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808 ...
	I0308 04:04:12.900937  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808 (perms=drwx------)
	I0308 04:04:12.900950  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808
	I0308 04:04:12.900972  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 04:04:12.901001  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 04:04:12.901011  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:04:12.901030  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 04:04:12.901046  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 04:04:12.901064  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 04:04:12.901075  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 04:04:12.901096  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home/jenkins
	I0308 04:04:12.901106  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Checking permissions on dir: /home
	I0308 04:04:12.901112  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Skipping /home - not owner
	I0308 04:04:12.901128  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 04:04:12.901140  956560 main.go:141] libmachine: (old-k8s-version-496808) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 04:04:12.901153  956560 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:04:12.902238  956560 main.go:141] libmachine: (old-k8s-version-496808) define libvirt domain using xml: 
	I0308 04:04:12.902263  956560 main.go:141] libmachine: (old-k8s-version-496808) <domain type='kvm'>
	I0308 04:04:12.902279  956560 main.go:141] libmachine: (old-k8s-version-496808)   <name>old-k8s-version-496808</name>
	I0308 04:04:12.902292  956560 main.go:141] libmachine: (old-k8s-version-496808)   <memory unit='MiB'>2200</memory>
	I0308 04:04:12.902301  956560 main.go:141] libmachine: (old-k8s-version-496808)   <vcpu>2</vcpu>
	I0308 04:04:12.902313  956560 main.go:141] libmachine: (old-k8s-version-496808)   <features>
	I0308 04:04:12.902325  956560 main.go:141] libmachine: (old-k8s-version-496808)     <acpi/>
	I0308 04:04:12.902332  956560 main.go:141] libmachine: (old-k8s-version-496808)     <apic/>
	I0308 04:04:12.902343  956560 main.go:141] libmachine: (old-k8s-version-496808)     <pae/>
	I0308 04:04:12.902359  956560 main.go:141] libmachine: (old-k8s-version-496808)     
	I0308 04:04:12.902386  956560 main.go:141] libmachine: (old-k8s-version-496808)   </features>
	I0308 04:04:12.902407  956560 main.go:141] libmachine: (old-k8s-version-496808)   <cpu mode='host-passthrough'>
	I0308 04:04:12.902416  956560 main.go:141] libmachine: (old-k8s-version-496808)   
	I0308 04:04:12.902427  956560 main.go:141] libmachine: (old-k8s-version-496808)   </cpu>
	I0308 04:04:12.902439  956560 main.go:141] libmachine: (old-k8s-version-496808)   <os>
	I0308 04:04:12.902448  956560 main.go:141] libmachine: (old-k8s-version-496808)     <type>hvm</type>
	I0308 04:04:12.902454  956560 main.go:141] libmachine: (old-k8s-version-496808)     <boot dev='cdrom'/>
	I0308 04:04:12.902460  956560 main.go:141] libmachine: (old-k8s-version-496808)     <boot dev='hd'/>
	I0308 04:04:12.902466  956560 main.go:141] libmachine: (old-k8s-version-496808)     <bootmenu enable='no'/>
	I0308 04:04:12.902472  956560 main.go:141] libmachine: (old-k8s-version-496808)   </os>
	I0308 04:04:12.902478  956560 main.go:141] libmachine: (old-k8s-version-496808)   <devices>
	I0308 04:04:12.902488  956560 main.go:141] libmachine: (old-k8s-version-496808)     <disk type='file' device='cdrom'>
	I0308 04:04:12.902505  956560 main.go:141] libmachine: (old-k8s-version-496808)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/boot2docker.iso'/>
	I0308 04:04:12.902517  956560 main.go:141] libmachine: (old-k8s-version-496808)       <target dev='hdc' bus='scsi'/>
	I0308 04:04:12.902537  956560 main.go:141] libmachine: (old-k8s-version-496808)       <readonly/>
	I0308 04:04:12.902558  956560 main.go:141] libmachine: (old-k8s-version-496808)     </disk>
	I0308 04:04:12.902569  956560 main.go:141] libmachine: (old-k8s-version-496808)     <disk type='file' device='disk'>
	I0308 04:04:12.902582  956560 main.go:141] libmachine: (old-k8s-version-496808)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 04:04:12.902611  956560 main.go:141] libmachine: (old-k8s-version-496808)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/old-k8s-version-496808.rawdisk'/>
	I0308 04:04:12.902623  956560 main.go:141] libmachine: (old-k8s-version-496808)       <target dev='hda' bus='virtio'/>
	I0308 04:04:12.902648  956560 main.go:141] libmachine: (old-k8s-version-496808)     </disk>
	I0308 04:04:12.902682  956560 main.go:141] libmachine: (old-k8s-version-496808)     <interface type='network'>
	I0308 04:04:12.902697  956560 main.go:141] libmachine: (old-k8s-version-496808)       <source network='mk-old-k8s-version-496808'/>
	I0308 04:04:12.902709  956560 main.go:141] libmachine: (old-k8s-version-496808)       <model type='virtio'/>
	I0308 04:04:12.902722  956560 main.go:141] libmachine: (old-k8s-version-496808)     </interface>
	I0308 04:04:12.902733  956560 main.go:141] libmachine: (old-k8s-version-496808)     <interface type='network'>
	I0308 04:04:12.902744  956560 main.go:141] libmachine: (old-k8s-version-496808)       <source network='default'/>
	I0308 04:04:12.902759  956560 main.go:141] libmachine: (old-k8s-version-496808)       <model type='virtio'/>
	I0308 04:04:12.902771  956560 main.go:141] libmachine: (old-k8s-version-496808)     </interface>
	I0308 04:04:12.902781  956560 main.go:141] libmachine: (old-k8s-version-496808)     <serial type='pty'>
	I0308 04:04:12.902794  956560 main.go:141] libmachine: (old-k8s-version-496808)       <target port='0'/>
	I0308 04:04:12.902805  956560 main.go:141] libmachine: (old-k8s-version-496808)     </serial>
	I0308 04:04:12.902817  956560 main.go:141] libmachine: (old-k8s-version-496808)     <console type='pty'>
	I0308 04:04:12.902830  956560 main.go:141] libmachine: (old-k8s-version-496808)       <target type='serial' port='0'/>
	I0308 04:04:12.902840  956560 main.go:141] libmachine: (old-k8s-version-496808)     </console>
	I0308 04:04:12.902847  956560 main.go:141] libmachine: (old-k8s-version-496808)     <rng model='virtio'>
	I0308 04:04:12.902864  956560 main.go:141] libmachine: (old-k8s-version-496808)       <backend model='random'>/dev/random</backend>
	I0308 04:04:12.902873  956560 main.go:141] libmachine: (old-k8s-version-496808)     </rng>
	I0308 04:04:12.902881  956560 main.go:141] libmachine: (old-k8s-version-496808)     
	I0308 04:04:12.902891  956560 main.go:141] libmachine: (old-k8s-version-496808)     
	I0308 04:04:12.902909  956560 main.go:141] libmachine: (old-k8s-version-496808)   </devices>
	I0308 04:04:12.902929  956560 main.go:141] libmachine: (old-k8s-version-496808) </domain>
	I0308 04:04:12.902945  956560 main.go:141] libmachine: (old-k8s-version-496808) 
	I0308 04:04:12.907290  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:cb:0b:3f in network default
	I0308 04:04:12.908018  956560 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:04:12.908044  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:12.908985  956560 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:04:12.909364  956560 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:04:12.910030  956560 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:04:12.911044  956560 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:04:14.238567  956560 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:04:14.239475  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:14.239946  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:14.240034  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:14.239918  956779 retry.go:31] will retry after 255.529524ms: waiting for machine to come up
	I0308 04:04:14.497634  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:14.498215  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:14.498269  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:14.498162  956779 retry.go:31] will retry after 368.440696ms: waiting for machine to come up
	I0308 04:04:14.868963  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:14.869557  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:14.869590  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:14.869502  956779 retry.go:31] will retry after 465.663722ms: waiting for machine to come up
	I0308 04:04:15.337114  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:15.337748  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:15.337795  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:15.337702  956779 retry.go:31] will retry after 450.953902ms: waiting for machine to come up
	I0308 04:04:15.790523  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:15.791079  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:15.791111  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:15.791019  956779 retry.go:31] will retry after 539.22044ms: waiting for machine to come up
	I0308 04:04:16.331651  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:16.332101  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:16.332126  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:16.332064  956779 retry.go:31] will retry after 756.785663ms: waiting for machine to come up
	I0308 04:04:17.090030  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:17.090401  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:17.090454  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:17.090342  956779 retry.go:31] will retry after 794.859759ms: waiting for machine to come up
	I0308 04:04:17.886639  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:17.887115  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:17.887144  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:17.887061  956779 retry.go:31] will retry after 1.219252022s: waiting for machine to come up
	I0308 04:04:19.108333  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:19.108863  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:19.108894  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:19.108816  956779 retry.go:31] will retry after 1.700842235s: waiting for machine to come up
	I0308 04:04:20.811150  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:20.811704  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:20.811735  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:20.811655  956779 retry.go:31] will retry after 2.197064085s: waiting for machine to come up
	I0308 04:04:23.010692  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:23.011174  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:23.011201  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:23.011101  956779 retry.go:31] will retry after 2.53605479s: waiting for machine to come up
	I0308 04:04:25.551007  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:25.551658  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:25.551716  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:25.551599  956779 retry.go:31] will retry after 2.637016781s: waiting for machine to come up
	I0308 04:04:28.190349  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:28.190825  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:28.190867  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:28.190792  956779 retry.go:31] will retry after 3.238039351s: waiting for machine to come up
	I0308 04:04:31.431204  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:31.431723  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:04:31.431757  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:04:31.431672  956779 retry.go:31] will retry after 4.939229076s: waiting for machine to come up
	I0308 04:04:36.375498  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.376224  956560 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:04:36.376251  956560 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:04:36.376265  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.376682  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808
	I0308 04:04:36.451672  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:04:36.451707  956560 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:04:36.451722  956560 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:04:36.454942  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.455398  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:36.455428  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.455594  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:04:36.455622  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:04:36.455665  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:04:36.455688  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:04:36.455699  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:04:36.581637  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:04:36.581956  956560 main.go:141] libmachine: (old-k8s-version-496808) KVM machine creation complete!
	I0308 04:04:36.582264  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:04:36.583020  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:36.583249  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:36.583458  956560 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0308 04:04:36.583480  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:04:36.584863  956560 main.go:141] libmachine: Detecting operating system of created instance...
	I0308 04:04:36.584881  956560 main.go:141] libmachine: Waiting for SSH to be available...
	I0308 04:04:36.584889  956560 main.go:141] libmachine: Getting to WaitForSSH function...
	I0308 04:04:36.584898  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:36.587209  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.587597  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:36.587641  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.587809  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:36.588009  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.588173  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.588317  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:36.588486  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:36.588693  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:36.588706  956560 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0308 04:04:36.688950  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:04:36.688978  956560 main.go:141] libmachine: Detecting the provisioner...
	I0308 04:04:36.688988  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:36.691619  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.691932  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:36.691992  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.692129  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:36.692336  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.692506  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.692625  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:36.692795  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:36.693036  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:36.693051  956560 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0308 04:04:36.794553  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0308 04:04:36.794654  956560 main.go:141] libmachine: found compatible host: buildroot
	I0308 04:04:36.794667  956560 main.go:141] libmachine: Provisioning with buildroot...
	I0308 04:04:36.794679  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:04:36.794968  956560 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:04:36.795001  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:04:36.795211  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:36.798043  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.798440  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:36.798472  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.798646  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:36.798832  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.798976  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.799095  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:36.799244  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:36.799432  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:36.799445  956560 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:04:36.914974  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:04:36.915005  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:36.917763  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.918100  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:36.918129  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:36.918253  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:36.918478  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.918674  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:36.918859  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:36.919006  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:36.919192  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:36.919216  956560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:04:37.028601  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:04:37.028641  956560 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:04:37.028691  956560 buildroot.go:174] setting up certificates
	I0308 04:04:37.028701  956560 provision.go:84] configureAuth start
	I0308 04:04:37.028717  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:04:37.029035  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:04:37.031866  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.032336  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.032370  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.032568  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.034821  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.035157  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.035185  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.035288  956560 provision.go:143] copyHostCerts
	I0308 04:04:37.035349  956560 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:04:37.035362  956560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:04:37.035433  956560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:04:37.035559  956560 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:04:37.035581  956560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:04:37.035615  956560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:04:37.035710  956560 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:04:37.035721  956560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:04:37.035744  956560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:04:37.035840  956560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
	I0308 04:04:37.169594  956560 provision.go:177] copyRemoteCerts
	I0308 04:04:37.169653  956560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:04:37.169682  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.172368  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.172766  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.172790  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.173029  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.173223  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.173435  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.173598  956560 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:04:37.251972  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:04:37.278919  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:04:37.305430  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:04:37.331495  956560 provision.go:87] duration metric: took 302.774505ms to configureAuth
	I0308 04:04:37.331531  956560 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:04:37.331700  956560 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:04:37.331781  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.334485  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.334790  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.334831  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.334960  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.335183  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.335366  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.335512  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.335650  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:37.335828  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:37.335845  956560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:04:37.632903  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:04:37.632930  956560 main.go:141] libmachine: Checking connection to Docker...
	I0308 04:04:37.632939  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetURL
	I0308 04:04:37.634290  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using libvirt version 6000000
	I0308 04:04:37.636516  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.636913  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.636940  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.637100  956560 main.go:141] libmachine: Docker is up and running!
	I0308 04:04:37.637116  956560 main.go:141] libmachine: Reticulating splines...
	I0308 04:04:37.637124  956560 client.go:171] duration metric: took 25.258456958s to LocalClient.Create
	I0308 04:04:37.637146  956560 start.go:167] duration metric: took 25.258564545s to libmachine.API.Create "old-k8s-version-496808"
	I0308 04:04:37.637175  956560 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:04:37.637188  956560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:04:37.637222  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:37.637536  956560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:04:37.637561  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.639739  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.640065  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.640095  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.640254  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.640433  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.640580  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.640750  956560 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:04:37.721368  956560 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:04:37.726204  956560 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:04:37.726232  956560 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:04:37.726304  956560 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:04:37.726375  956560 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:04:37.726463  956560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:04:37.736455  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:04:37.762649  956560 start.go:296] duration metric: took 125.458934ms for postStartSetup
	I0308 04:04:37.762700  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:04:37.763354  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:04:37.766186  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.766543  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.766584  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.766897  956560 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:04:37.767149  956560 start.go:128] duration metric: took 25.412322495s to createHost
	I0308 04:04:37.767188  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.769566  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.769871  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.769892  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.770037  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.770222  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.770394  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.770554  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.770726  956560 main.go:141] libmachine: Using SSH client type: native
	I0308 04:04:37.770938  956560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:04:37.770958  956560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:04:37.870626  956560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709870677.849663957
	
	I0308 04:04:37.870653  956560 fix.go:216] guest clock: 1709870677.849663957
	I0308 04:04:37.870660  956560 fix.go:229] Guest: 2024-03-08 04:04:37.849663957 +0000 UTC Remote: 2024-03-08 04:04:37.767164091 +0000 UTC m=+34.335914987 (delta=82.499866ms)
	I0308 04:04:37.870679  956560 fix.go:200] guest clock delta is within tolerance: 82.499866ms
	I0308 04:04:37.870684  956560 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 25.516058909s
	I0308 04:04:37.870711  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:37.871025  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:04:37.873951  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.874347  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.874373  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.874507  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:37.875018  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:37.875222  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:04:37.875340  956560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:04:37.875379  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.875402  956560 ssh_runner.go:195] Run: cat /version.json
	I0308 04:04:37.875421  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:04:37.877990  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.879152  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.879193  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.879217  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.879231  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.879441  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.879626  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.879633  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:37.879650  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:37.879785  956560 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:04:37.879889  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:04:37.880052  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:04:37.880180  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:04:37.880286  956560 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:04:37.987076  956560 ssh_runner.go:195] Run: systemctl --version
	I0308 04:04:37.993762  956560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:04:38.347754  956560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:04:38.356057  956560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:04:38.356125  956560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:04:38.380855  956560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:04:38.380883  956560 start.go:494] detecting cgroup driver to use...
	I0308 04:04:38.380956  956560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:04:38.403714  956560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:04:38.421325  956560 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:04:38.421381  956560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:04:38.437824  956560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:04:38.454012  956560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:04:38.596781  956560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:04:38.745214  956560 docker.go:233] disabling docker service ...
	I0308 04:04:38.745319  956560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:04:38.762254  956560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:04:38.778010  956560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:04:38.929994  956560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:04:39.048301  956560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:04:39.064024  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:04:39.085952  956560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:04:39.086044  956560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:04:39.097749  956560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:04:39.097816  956560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:04:39.109414  956560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:04:39.120792  956560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:04:39.132036  956560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:04:39.143417  956560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:04:39.153910  956560 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:04:39.153962  956560 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:04:39.168813  956560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:04:39.179974  956560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:04:39.322754  956560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:04:39.484826  956560 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:04:39.484898  956560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:04:39.490500  956560 start.go:562] Will wait 60s for crictl version
	I0308 04:04:39.490559  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:39.494965  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:04:39.535674  956560 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:04:39.535812  956560 ssh_runner.go:195] Run: crio --version
	I0308 04:04:39.568453  956560 ssh_runner.go:195] Run: crio --version
	I0308 04:04:39.601119  956560 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:04:39.602581  956560 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:04:39.605988  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:39.606469  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:04:29 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:04:39.606494  956560 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:04:39.606779  956560 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:04:39.611994  956560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:04:39.628053  956560 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:04:39.628169  956560 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:04:39.628240  956560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:04:39.667784  956560 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:04:39.667857  956560 ssh_runner.go:195] Run: which lz4
	I0308 04:04:39.672766  956560 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:04:39.678025  956560 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:04:39.678073  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:04:41.780687  956560 crio.go:444] duration metric: took 2.107964858s to copy over tarball
	I0308 04:04:41.780762  956560 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:04:44.615803  956560 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.835010041s)
	I0308 04:04:44.615839  956560 crio.go:451] duration metric: took 2.835118328s to extract the tarball
	I0308 04:04:44.615850  956560 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:04:44.663178  956560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:04:44.710785  956560 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:04:44.710823  956560 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:04:44.710953  956560 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:04:44.710991  956560 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:04:44.711010  956560 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:04:44.710920  956560 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:04:44.711136  956560 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:04:44.710929  956560 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:04:44.710968  956560 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:04:44.710979  956560 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:04:44.712838  956560 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:04:44.712937  956560 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:04:44.712847  956560 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:04:44.712846  956560 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:04:44.712857  956560 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:04:44.712856  956560 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:04:44.712852  956560 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:04:44.712870  956560 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:04:44.849753  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:04:44.857199  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:04:44.869192  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:04:44.872787  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:04:44.876718  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:04:44.925506  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:04:44.927407  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:04:44.954914  956560 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:04:44.955007  956560 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:04:44.954924  956560 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:04:44.955059  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:44.955068  956560 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:04:44.955108  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.004659  956560 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:04:45.012449  956560 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:04:45.012504  956560 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:04:45.012570  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.041835  956560 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:04:45.041893  956560 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:04:45.041896  956560 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:04:45.041930  956560 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:04:45.041946  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.041984  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.086494  956560 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:04:45.086548  956560 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:04:45.086600  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.100770  956560 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:04:45.100870  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:04:45.100878  956560 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:04:45.100979  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:04:45.100986  956560 ssh_runner.go:195] Run: which crictl
	I0308 04:04:45.214177  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:04:45.214247  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:04:45.214185  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:04:45.214391  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:04:45.214424  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:04:45.214525  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:04:45.214568  956560 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:04:45.326340  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:04:45.326596  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:04:45.351209  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:04:45.351221  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:04:45.351254  956560 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:04:45.351358  956560 cache_images.go:92] duration metric: took 640.512004ms to LoadCachedImages
	W0308 04:04:45.351468  956560 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0308 04:04:45.351485  956560 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:04:45.351686  956560 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:04:45.351771  956560 ssh_runner.go:195] Run: crio config
	I0308 04:04:45.406777  956560 cni.go:84] Creating CNI manager for ""
	I0308 04:04:45.406809  956560 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:04:45.406831  956560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:04:45.406854  956560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:04:45.407058  956560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:04:45.407153  956560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:04:45.418823  956560 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:04:45.418903  956560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:04:45.430421  956560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:04:45.449095  956560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:04:45.467992  956560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:04:45.487596  956560 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:04:45.492208  956560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:04:45.506937  956560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:04:45.648717  956560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:04:45.673716  956560 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:04:45.673747  956560 certs.go:194] generating shared ca certs ...
	I0308 04:04:45.673771  956560 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:45.673977  956560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:04:45.674046  956560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:04:45.674059  956560 certs.go:256] generating profile certs ...
	I0308 04:04:45.674132  956560 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:04:45.674155  956560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt with IP's: []
	I0308 04:04:45.901658  956560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt ...
	I0308 04:04:45.901699  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: {Name:mkedb6886e5d6679a08540728db100dbc5f2db9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:45.901913  956560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key ...
	I0308 04:04:45.901939  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key: {Name:mk5cb275a3b33aeb3902f7d702b630ca1c0f28ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:45.902057  956560 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:04:45.902088  956560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt.bb63bcf1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.3]
	I0308 04:04:46.156405  956560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt.bb63bcf1 ...
	I0308 04:04:46.156445  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt.bb63bcf1: {Name:mk31893a0c976dbc1ae64b9059201d98d50a9cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:46.156653  956560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1 ...
	I0308 04:04:46.156676  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1: {Name:mk560f8b3ddf676281b9bac59c53ec37aac6c564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:46.156784  956560 certs.go:381] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt.bb63bcf1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt
	I0308 04:04:46.156893  956560 certs.go:385] copying /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1 -> /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key
	I0308 04:04:46.156975  956560 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:04:46.156995  956560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt with IP's: []
	I0308 04:04:46.363488  956560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt ...
	I0308 04:04:46.363527  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt: {Name:mk55deba8065940fd30ff70d3b91a02d78d93a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:46.363716  956560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key ...
	I0308 04:04:46.363735  956560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key: {Name:mk24d8422e03500cf4b215d9bf2c2750ff5b220c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:04:46.363947  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:04:46.363986  956560 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:04:46.363999  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:04:46.364019  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:04:46.364041  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:04:46.364063  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:04:46.364104  956560 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:04:46.364782  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:04:46.394392  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:04:46.429898  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:04:46.455817  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:04:46.486038  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:04:46.516802  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:04:46.545102  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:04:46.576893  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:04:46.618814  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:04:46.651013  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:04:46.681507  956560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:04:46.710806  956560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:04:46.730522  956560 ssh_runner.go:195] Run: openssl version
	I0308 04:04:46.738026  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:04:46.751460  956560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:04:46.757108  956560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:04:46.757176  956560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:04:46.764502  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:04:46.777126  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:04:46.790453  956560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:04:46.795929  956560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:04:46.795999  956560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:04:46.803384  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:04:46.817092  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:04:46.830593  956560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:04:46.836048  956560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:04:46.836113  956560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:04:46.842566  956560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:04:46.854666  956560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:04:46.859457  956560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 04:04:46.859518  956560 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:04:46.859673  956560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:04:46.859727  956560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:04:46.904062  956560 cri.go:89] found id: ""
	I0308 04:04:46.904160  956560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 04:04:46.914790  956560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:04:46.924948  956560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:04:46.935985  956560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:04:46.936007  956560 kubeadm.go:156] found existing configuration files:
	
	I0308 04:04:46.936051  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:04:46.947364  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:04:46.947433  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:04:46.958148  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:04:46.968585  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:04:46.968649  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:04:46.978905  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:04:46.988792  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:04:46.988856  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:04:46.998920  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:04:47.008963  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:04:47.009029  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:04:47.018973  956560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:04:47.295341  956560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:06:45.281611  956560 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:06:45.281763  956560 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:06:45.283261  956560 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:06:45.283362  956560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:06:45.283471  956560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:06:45.283596  956560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:06:45.283741  956560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:06:45.283848  956560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:06:45.285682  956560 out.go:204]   - Generating certificates and keys ...
	I0308 04:06:45.285780  956560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:06:45.285874  956560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:06:45.285962  956560 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 04:06:45.286063  956560 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 04:06:45.286167  956560 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 04:06:45.286268  956560 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 04:06:45.286345  956560 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 04:06:45.286448  956560 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0308 04:06:45.286498  956560 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 04:06:45.286610  956560 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0308 04:06:45.286668  956560 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 04:06:45.286720  956560 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 04:06:45.286765  956560 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 04:06:45.286827  956560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:06:45.286874  956560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:06:45.286956  956560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:06:45.287027  956560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:06:45.287111  956560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:06:45.287276  956560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:06:45.287382  956560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:06:45.287441  956560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:06:45.287533  956560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:06:45.289650  956560 out.go:204]   - Booting up control plane ...
	I0308 04:06:45.289757  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:06:45.289840  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:06:45.289928  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:06:45.290027  956560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:06:45.290213  956560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:06:45.290260  956560 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:06:45.290317  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:06:45.290483  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:06:45.290545  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:06:45.290763  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:06:45.290857  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:06:45.291137  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:06:45.291242  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:06:45.291516  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:06:45.291619  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:06:45.291870  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:06:45.291880  956560 kubeadm.go:309] 
	I0308 04:06:45.291939  956560 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:06:45.291996  956560 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:06:45.292010  956560 kubeadm.go:309] 
	I0308 04:06:45.292056  956560 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:06:45.292087  956560 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:06:45.292196  956560 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:06:45.292208  956560 kubeadm.go:309] 
	I0308 04:06:45.292353  956560 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:06:45.292415  956560 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:06:45.292462  956560 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:06:45.292472  956560 kubeadm.go:309] 
	I0308 04:06:45.292629  956560 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:06:45.292732  956560 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:06:45.292744  956560 kubeadm.go:309] 
	I0308 04:06:45.292888  956560 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:06:45.292990  956560 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:06:45.293099  956560 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:06:45.293211  956560 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:06:45.293289  956560 kubeadm.go:309] 
	W0308 04:06:45.293388  956560 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-496808] and IPs [192.168.39.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:06:45.293434  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:06:47.558047  956560 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.264572842s)
	I0308 04:06:47.558149  956560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:06:47.575276  956560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:06:47.588818  956560 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:06:47.588847  956560 kubeadm.go:156] found existing configuration files:
	
	I0308 04:06:47.588905  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:06:47.601535  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:06:47.601626  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:06:47.614816  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:06:47.627893  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:06:47.627989  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:06:47.641949  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:06:47.652575  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:06:47.652635  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:06:47.663300  956560 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:06:47.673510  956560 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:06:47.673580  956560 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:06:47.684778  956560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:06:47.765233  956560 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:06:47.765422  956560 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:06:47.935656  956560 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:06:47.935846  956560 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:06:47.936006  956560 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:06:48.151222  956560 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:06:48.153098  956560 out.go:204]   - Generating certificates and keys ...
	I0308 04:06:48.153188  956560 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:06:48.153270  956560 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:06:48.153422  956560 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:06:48.153532  956560 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:06:48.153626  956560 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:06:48.153715  956560 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:06:48.153814  956560 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:06:48.153900  956560 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:06:48.153976  956560 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:06:48.154148  956560 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:06:48.154728  956560 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:06:48.154981  956560 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:06:48.406875  956560 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:06:48.778332  956560 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:06:48.855424  956560 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:06:49.230365  956560 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:06:49.247306  956560 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:06:49.248489  956560 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:06:49.248570  956560 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:06:49.398400  956560 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:06:49.400189  956560 out.go:204]   - Booting up control plane ...
	I0308 04:06:49.400310  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:06:49.404745  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:06:49.413477  956560 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:06:49.414689  956560 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:06:49.417743  956560 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:07:29.420509  956560 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:07:29.421464  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:07:29.421780  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:07:34.422672  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:07:34.422909  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:07:44.423596  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:07:44.423819  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:08:04.425107  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:08:04.425344  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:08:44.425116  956560 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:08:44.425394  956560 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:08:44.425414  956560 kubeadm.go:309] 
	I0308 04:08:44.425457  956560 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:08:44.425726  956560 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:08:44.425742  956560 kubeadm.go:309] 
	I0308 04:08:44.425789  956560 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:08:44.425848  956560 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:08:44.425991  956560 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:08:44.426004  956560 kubeadm.go:309] 
	I0308 04:08:44.426145  956560 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:08:44.426215  956560 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:08:44.426292  956560 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:08:44.426313  956560 kubeadm.go:309] 
	I0308 04:08:44.426460  956560 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:08:44.426585  956560 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:08:44.426603  956560 kubeadm.go:309] 
	I0308 04:08:44.426730  956560 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:08:44.426850  956560 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:08:44.426954  956560 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:08:44.427054  956560 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:08:44.427074  956560 kubeadm.go:309] 
	I0308 04:08:44.428527  956560 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:08:44.428629  956560 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:08:44.428716  956560 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:08:44.428790  956560 kubeadm.go:393] duration metric: took 3m57.569278513s to StartCluster
	I0308 04:08:44.428836  956560 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:08:44.428897  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:08:44.486126  956560 cri.go:89] found id: ""
	I0308 04:08:44.486172  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.486181  956560 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:08:44.486187  956560 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:08:44.486240  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:08:44.533687  956560 cri.go:89] found id: ""
	I0308 04:08:44.533717  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.533726  956560 logs.go:278] No container was found matching "etcd"
	I0308 04:08:44.533739  956560 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:08:44.533795  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:08:44.572560  956560 cri.go:89] found id: ""
	I0308 04:08:44.572592  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.572605  956560 logs.go:278] No container was found matching "coredns"
	I0308 04:08:44.572612  956560 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:08:44.572668  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:08:44.613212  956560 cri.go:89] found id: ""
	I0308 04:08:44.613250  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.613262  956560 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:08:44.613271  956560 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:08:44.613363  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:08:44.651372  956560 cri.go:89] found id: ""
	I0308 04:08:44.651407  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.651418  956560 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:08:44.651428  956560 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:08:44.651500  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:08:44.691605  956560 cri.go:89] found id: ""
	I0308 04:08:44.691636  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.691659  956560 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:08:44.691666  956560 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:08:44.691722  956560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:08:44.729883  956560 cri.go:89] found id: ""
	I0308 04:08:44.729911  956560 logs.go:276] 0 containers: []
	W0308 04:08:44.729918  956560 logs.go:278] No container was found matching "kindnet"
	I0308 04:08:44.729929  956560 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:08:44.729947  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:08:44.863555  956560 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:08:44.863581  956560 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:08:44.863599  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:08:44.960270  956560 logs.go:123] Gathering logs for container status ...
	I0308 04:08:44.960321  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:08:45.022079  956560 logs.go:123] Gathering logs for kubelet ...
	I0308 04:08:45.022110  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:08:45.072362  956560 logs.go:123] Gathering logs for dmesg ...
	I0308 04:08:45.072400  956560 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:08:45.086741  956560 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:08:45.086787  956560 out.go:239] * 
	* 
	W0308 04:08:45.086852  956560 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:08:45.086882  956560 out.go:239] * 
	* 
	W0308 04:08:45.087973  956560 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:08:45.091881  956560 out.go:177] 
	W0308 04:08:45.093424  956560 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:08:45.093481  956560 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:08:45.093504  956560 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:08:45.095097  956560 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 6 (239.939356ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:08:45.376328  958964 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-496808" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (281.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-477676 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-477676 --alsologtostderr -v=3: exit status 82 (2m0.561420631s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-477676"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:06:40.739305  958292 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:06:40.739589  958292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:06:40.739601  958292 out.go:304] Setting ErrFile to fd 2...
	I0308 04:06:40.739608  958292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:06:40.739859  958292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:06:40.740106  958292 out.go:298] Setting JSON to false
	I0308 04:06:40.740190  958292 mustload.go:65] Loading cluster: no-preload-477676
	I0308 04:06:40.740508  958292 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:06:40.740570  958292 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:06:40.740729  958292 mustload.go:65] Loading cluster: no-preload-477676
	I0308 04:06:40.740831  958292 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:06:40.740856  958292 stop.go:39] StopHost: no-preload-477676
	I0308 04:06:40.741215  958292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:06:40.741263  958292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:06:40.757689  958292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0308 04:06:40.758175  958292 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:06:40.758844  958292 main.go:141] libmachine: Using API Version  1
	I0308 04:06:40.758872  958292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:06:40.759332  958292 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:06:40.761255  958292 out.go:177] * Stopping node "no-preload-477676"  ...
	I0308 04:06:40.762461  958292 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 04:06:40.762500  958292 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:06:40.762857  958292 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 04:06:40.762882  958292 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:06:40.766073  958292 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:06:40.766532  958292 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:06:40.766560  958292 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:06:40.766728  958292 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:06:40.766937  958292 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:06:40.767100  958292 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:06:40.767227  958292 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:06:40.918285  958292 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 04:06:40.981483  958292 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 04:06:41.043986  958292 main.go:141] libmachine: Stopping "no-preload-477676"...
	I0308 04:06:41.044035  958292 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:06:41.045925  958292 main.go:141] libmachine: (no-preload-477676) Calling .Stop
	I0308 04:06:41.049557  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 0/120
	I0308 04:06:42.051574  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 1/120
	I0308 04:06:43.053033  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 2/120
	I0308 04:06:44.054768  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 3/120
	I0308 04:06:45.056473  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 4/120
	I0308 04:06:46.058943  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 5/120
	I0308 04:06:47.060455  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 6/120
	I0308 04:06:48.061984  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 7/120
	I0308 04:06:49.063791  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 8/120
	I0308 04:06:50.065208  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 9/120
	I0308 04:06:51.067703  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 10/120
	I0308 04:06:52.069226  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 11/120
	I0308 04:06:53.070552  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 12/120
	I0308 04:06:54.071961  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 13/120
	I0308 04:06:55.073387  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 14/120
	I0308 04:06:56.075307  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 15/120
	I0308 04:06:57.076653  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 16/120
	I0308 04:06:58.077983  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 17/120
	I0308 04:06:59.079450  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 18/120
	I0308 04:07:00.080755  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 19/120
	I0308 04:07:01.083020  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 20/120
	I0308 04:07:02.084564  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 21/120
	I0308 04:07:03.086237  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 22/120
	I0308 04:07:04.087826  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 23/120
	I0308 04:07:05.089396  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 24/120
	I0308 04:07:06.091475  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 25/120
	I0308 04:07:07.093711  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 26/120
	I0308 04:07:08.096094  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 27/120
	I0308 04:07:09.097655  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 28/120
	I0308 04:07:10.099111  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 29/120
	I0308 04:07:11.101333  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 30/120
	I0308 04:07:12.102794  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 31/120
	I0308 04:07:13.104391  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 32/120
	I0308 04:07:14.105827  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 33/120
	I0308 04:07:15.107135  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 34/120
	I0308 04:07:16.109308  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 35/120
	I0308 04:07:17.110655  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 36/120
	I0308 04:07:18.112016  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 37/120
	I0308 04:07:19.113216  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 38/120
	I0308 04:07:20.114712  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 39/120
	I0308 04:07:21.116588  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 40/120
	I0308 04:07:22.117858  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 41/120
	I0308 04:07:23.119253  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 42/120
	I0308 04:07:24.120664  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 43/120
	I0308 04:07:25.122179  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 44/120
	I0308 04:07:26.124287  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 45/120
	I0308 04:07:27.125605  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 46/120
	I0308 04:07:28.127851  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 47/120
	I0308 04:07:29.129317  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 48/120
	I0308 04:07:30.130936  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 49/120
	I0308 04:07:31.132878  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 50/120
	I0308 04:07:32.134484  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 51/120
	I0308 04:07:33.135907  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 52/120
	I0308 04:07:34.137297  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 53/120
	I0308 04:07:35.138546  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 54/120
	I0308 04:07:36.140470  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 55/120
	I0308 04:07:37.142031  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 56/120
	I0308 04:07:38.143576  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 57/120
	I0308 04:07:39.144968  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 58/120
	I0308 04:07:40.146381  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 59/120
	I0308 04:07:41.148686  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 60/120
	I0308 04:07:42.150232  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 61/120
	I0308 04:07:43.151427  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 62/120
	I0308 04:07:44.152686  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 63/120
	I0308 04:07:45.153922  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 64/120
	I0308 04:07:46.155578  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 65/120
	I0308 04:07:47.156675  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 66/120
	I0308 04:07:48.157975  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 67/120
	I0308 04:07:49.159456  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 68/120
	I0308 04:07:50.160523  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 69/120
	I0308 04:07:51.162448  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 70/120
	I0308 04:07:52.163643  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 71/120
	I0308 04:07:53.164723  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 72/120
	I0308 04:07:54.165823  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 73/120
	I0308 04:07:55.166823  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 74/120
	I0308 04:07:56.168681  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 75/120
	I0308 04:07:57.169860  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 76/120
	I0308 04:07:58.170950  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 77/120
	I0308 04:07:59.172073  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 78/120
	I0308 04:08:00.173180  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 79/120
	I0308 04:08:01.175024  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 80/120
	I0308 04:08:02.176208  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 81/120
	I0308 04:08:03.177505  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 82/120
	I0308 04:08:04.178664  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 83/120
	I0308 04:08:05.179716  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 84/120
	I0308 04:08:06.181527  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 85/120
	I0308 04:08:07.182791  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 86/120
	I0308 04:08:08.184034  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 87/120
	I0308 04:08:09.185104  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 88/120
	I0308 04:08:10.186239  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 89/120
	I0308 04:08:11.188530  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 90/120
	I0308 04:08:12.189733  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 91/120
	I0308 04:08:13.190980  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 92/120
	I0308 04:08:14.192185  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 93/120
	I0308 04:08:15.193270  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 94/120
	I0308 04:08:16.194942  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 95/120
	I0308 04:08:17.196026  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 96/120
	I0308 04:08:18.197077  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 97/120
	I0308 04:08:19.198209  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 98/120
	I0308 04:08:20.199533  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 99/120
	I0308 04:08:21.201464  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 100/120
	I0308 04:08:22.203740  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 101/120
	I0308 04:08:23.204969  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 102/120
	I0308 04:08:24.206336  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 103/120
	I0308 04:08:25.207548  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 104/120
	I0308 04:08:26.209644  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 105/120
	I0308 04:08:27.211633  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 106/120
	I0308 04:08:28.213094  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 107/120
	I0308 04:08:29.214110  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 108/120
	I0308 04:08:30.215449  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 109/120
	I0308 04:08:31.217255  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 110/120
	I0308 04:08:32.218374  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 111/120
	I0308 04:08:33.219390  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 112/120
	I0308 04:08:34.220746  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 113/120
	I0308 04:08:35.221896  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 114/120
	I0308 04:08:36.223627  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 115/120
	I0308 04:08:37.224874  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 116/120
	I0308 04:08:38.225952  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 117/120
	I0308 04:08:39.227158  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 118/120
	I0308 04:08:40.228249  958292 main.go:141] libmachine: (no-preload-477676) Waiting for machine to stop 119/120
	I0308 04:08:41.229397  958292 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 04:08:41.229466  958292 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0308 04:08:41.231476  958292 out.go:177] 
	W0308 04:08:41.232898  958292 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0308 04:08:41.232916  958292 out.go:239] * 
	* 
	W0308 04:08:41.239667  958292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:08:41.241190  958292 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-477676 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676: exit status 3 (18.455039675s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:08:59.697676  958932 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host
	E0308 04:08:59.697699  958932 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-477676" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-416634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-416634 --alsologtostderr -v=3: exit status 82 (2m0.508615497s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-416634"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:06:51.409859  958457 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:06:51.410130  958457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:06:51.410140  958457 out.go:304] Setting ErrFile to fd 2...
	I0308 04:06:51.410144  958457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:06:51.410768  958457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:06:51.411288  958457 out.go:298] Setting JSON to false
	I0308 04:06:51.411376  958457 mustload.go:65] Loading cluster: embed-certs-416634
	I0308 04:06:51.412021  958457 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:06:51.412105  958457 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:06:51.412292  958457 mustload.go:65] Loading cluster: embed-certs-416634
	I0308 04:06:51.412426  958457 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:06:51.412462  958457 stop.go:39] StopHost: embed-certs-416634
	I0308 04:06:51.412878  958457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:06:51.412928  958457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:06:51.427938  958457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0308 04:06:51.428350  958457 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:06:51.428890  958457 main.go:141] libmachine: Using API Version  1
	I0308 04:06:51.428915  958457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:06:51.429343  958457 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:06:51.431926  958457 out.go:177] * Stopping node "embed-certs-416634"  ...
	I0308 04:06:51.433787  958457 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 04:06:51.433816  958457 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:06:51.434053  958457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 04:06:51.434078  958457 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:06:51.437183  958457 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:06:51.437682  958457 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:06:51.437711  958457 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:06:51.437855  958457 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:06:51.438046  958457 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:06:51.438209  958457 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:06:51.438382  958457 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:06:51.532142  958457 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 04:06:51.593137  958457 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 04:06:51.649300  958457 main.go:141] libmachine: Stopping "embed-certs-416634"...
	I0308 04:06:51.649334  958457 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:06:51.650908  958457 main.go:141] libmachine: (embed-certs-416634) Calling .Stop
	I0308 04:06:51.654587  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 0/120
	I0308 04:06:52.656030  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 1/120
	I0308 04:06:53.657409  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 2/120
	I0308 04:06:54.658666  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 3/120
	I0308 04:06:55.660272  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 4/120
	I0308 04:06:56.662282  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 5/120
	I0308 04:06:57.663784  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 6/120
	I0308 04:06:58.665048  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 7/120
	I0308 04:06:59.666512  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 8/120
	I0308 04:07:00.668178  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 9/120
	I0308 04:07:01.670194  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 10/120
	I0308 04:07:02.671691  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 11/120
	I0308 04:07:03.673299  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 12/120
	I0308 04:07:04.675076  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 13/120
	I0308 04:07:05.676792  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 14/120
	I0308 04:07:06.679097  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 15/120
	I0308 04:07:07.681211  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 16/120
	I0308 04:07:08.682522  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 17/120
	I0308 04:07:09.684136  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 18/120
	I0308 04:07:10.685596  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 19/120
	I0308 04:07:11.688001  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 20/120
	I0308 04:07:12.689585  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 21/120
	I0308 04:07:13.691142  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 22/120
	I0308 04:07:14.692927  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 23/120
	I0308 04:07:15.694283  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 24/120
	I0308 04:07:16.696563  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 25/120
	I0308 04:07:17.698066  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 26/120
	I0308 04:07:18.699442  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 27/120
	I0308 04:07:19.700903  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 28/120
	I0308 04:07:20.702208  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 29/120
	I0308 04:07:21.704224  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 30/120
	I0308 04:07:22.705628  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 31/120
	I0308 04:07:23.707910  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 32/120
	I0308 04:07:24.709345  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 33/120
	I0308 04:07:25.710984  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 34/120
	I0308 04:07:26.713187  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 35/120
	I0308 04:07:27.714788  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 36/120
	I0308 04:07:28.716193  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 37/120
	I0308 04:07:29.717809  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 38/120
	I0308 04:07:30.720194  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 39/120
	I0308 04:07:31.721655  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 40/120
	I0308 04:07:32.723108  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 41/120
	I0308 04:07:33.724485  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 42/120
	I0308 04:07:34.725859  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 43/120
	I0308 04:07:35.727779  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 44/120
	I0308 04:07:36.729786  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 45/120
	I0308 04:07:37.731803  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 46/120
	I0308 04:07:38.733494  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 47/120
	I0308 04:07:39.734840  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 48/120
	I0308 04:07:40.736648  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 49/120
	I0308 04:07:41.738747  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 50/120
	I0308 04:07:42.740362  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 51/120
	I0308 04:07:43.741811  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 52/120
	I0308 04:07:44.743544  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 53/120
	I0308 04:07:45.745619  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 54/120
	I0308 04:07:46.747161  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 55/120
	I0308 04:07:47.748489  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 56/120
	I0308 04:07:48.749823  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 57/120
	I0308 04:07:49.751390  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 58/120
	I0308 04:07:50.752610  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 59/120
	I0308 04:07:51.754791  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 60/120
	I0308 04:07:52.756296  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 61/120
	I0308 04:07:53.757690  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 62/120
	I0308 04:07:54.758888  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 63/120
	I0308 04:07:55.760239  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 64/120
	I0308 04:07:56.762206  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 65/120
	I0308 04:07:57.763542  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 66/120
	I0308 04:07:58.765185  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 67/120
	I0308 04:07:59.766544  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 68/120
	I0308 04:08:00.767903  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 69/120
	I0308 04:08:01.769901  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 70/120
	I0308 04:08:02.771891  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 71/120
	I0308 04:08:03.773452  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 72/120
	I0308 04:08:04.775928  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 73/120
	I0308 04:08:05.777237  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 74/120
	I0308 04:08:06.778607  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 75/120
	I0308 04:08:07.779784  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 76/120
	I0308 04:08:08.781148  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 77/120
	I0308 04:08:09.782429  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 78/120
	I0308 04:08:10.783640  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 79/120
	I0308 04:08:11.785807  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 80/120
	I0308 04:08:12.787331  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 81/120
	I0308 04:08:13.788704  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 82/120
	I0308 04:08:14.790223  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 83/120
	I0308 04:08:15.791440  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 84/120
	I0308 04:08:16.793556  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 85/120
	I0308 04:08:17.794973  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 86/120
	I0308 04:08:18.796258  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 87/120
	I0308 04:08:19.797970  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 88/120
	I0308 04:08:20.799599  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 89/120
	I0308 04:08:21.801727  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 90/120
	I0308 04:08:22.803467  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 91/120
	I0308 04:08:23.805562  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 92/120
	I0308 04:08:24.806934  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 93/120
	I0308 04:08:25.808477  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 94/120
	I0308 04:08:26.810791  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 95/120
	I0308 04:08:27.812254  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 96/120
	I0308 04:08:28.813738  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 97/120
	I0308 04:08:29.815273  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 98/120
	I0308 04:08:30.816715  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 99/120
	I0308 04:08:31.818894  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 100/120
	I0308 04:08:32.820383  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 101/120
	I0308 04:08:33.821587  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 102/120
	I0308 04:08:34.823082  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 103/120
	I0308 04:08:35.824337  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 104/120
	I0308 04:08:36.825703  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 105/120
	I0308 04:08:37.827152  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 106/120
	I0308 04:08:38.828342  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 107/120
	I0308 04:08:39.829814  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 108/120
	I0308 04:08:40.831088  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 109/120
	I0308 04:08:41.832993  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 110/120
	I0308 04:08:42.834495  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 111/120
	I0308 04:08:43.835752  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 112/120
	I0308 04:08:44.837369  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 113/120
	I0308 04:08:45.838637  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 114/120
	I0308 04:08:46.840983  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 115/120
	I0308 04:08:47.842676  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 116/120
	I0308 04:08:48.844033  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 117/120
	I0308 04:08:49.845446  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 118/120
	I0308 04:08:50.846936  958457 main.go:141] libmachine: (embed-certs-416634) Waiting for machine to stop 119/120
	I0308 04:08:51.848331  958457 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 04:08:51.848414  958457 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0308 04:08:51.850420  958457 out.go:177] 
	W0308 04:08:51.852351  958457 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0308 04:08:51.852370  958457 out.go:239] * 
	* 
	W0308 04:08:51.858862  958457 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:08:51.860266  958457 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-416634 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634: exit status 3 (18.58701696s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:09:10.449639  959116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0308 04:09:10.449660  959116 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-416634" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-968261 --alsologtostderr -v=3
E0308 04:07:52.008422  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 04:08:32.256686  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-968261 --alsologtostderr -v=3: exit status 82 (2m0.529206937s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-968261"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:07:41.853307  958753 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:07:41.853416  958753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:07:41.853424  958753 out.go:304] Setting ErrFile to fd 2...
	I0308 04:07:41.853428  958753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:07:41.853629  958753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:07:41.853857  958753 out.go:298] Setting JSON to false
	I0308 04:07:41.853924  958753 mustload.go:65] Loading cluster: default-k8s-diff-port-968261
	I0308 04:07:41.854256  958753 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:07:41.854322  958753 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:07:41.854475  958753 mustload.go:65] Loading cluster: default-k8s-diff-port-968261
	I0308 04:07:41.854588  958753 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:07:41.854618  958753 stop.go:39] StopHost: default-k8s-diff-port-968261
	I0308 04:07:41.854972  958753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:07:41.855031  958753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:07:41.869583  958753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0308 04:07:41.870065  958753 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:07:41.870626  958753 main.go:141] libmachine: Using API Version  1
	I0308 04:07:41.870651  958753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:07:41.871090  958753 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:07:41.873670  958753 out.go:177] * Stopping node "default-k8s-diff-port-968261"  ...
	I0308 04:07:41.875073  958753 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0308 04:07:41.875104  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:07:41.875335  958753 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0308 04:07:41.875364  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:07:41.878142  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:07:41.878552  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:06:49 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:07:41.878587  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:07:41.878715  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:07:41.878889  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:07:41.879045  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:07:41.879318  958753 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:07:41.987888  958753 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0308 04:07:42.044399  958753 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0308 04:07:42.118674  958753 main.go:141] libmachine: Stopping "default-k8s-diff-port-968261"...
	I0308 04:07:42.118701  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:07:42.120418  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Stop
	I0308 04:07:42.123985  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 0/120
	I0308 04:07:43.125318  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 1/120
	I0308 04:07:44.126635  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 2/120
	I0308 04:07:45.128415  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 3/120
	I0308 04:07:46.129753  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 4/120
	I0308 04:07:47.131668  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 5/120
	I0308 04:07:48.132968  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 6/120
	I0308 04:07:49.134529  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 7/120
	I0308 04:07:50.135935  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 8/120
	I0308 04:07:51.137487  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 9/120
	I0308 04:07:52.139767  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 10/120
	I0308 04:07:53.141224  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 11/120
	I0308 04:07:54.142504  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 12/120
	I0308 04:07:55.143870  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 13/120
	I0308 04:07:56.145124  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 14/120
	I0308 04:07:57.147275  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 15/120
	I0308 04:07:58.148657  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 16/120
	I0308 04:07:59.150188  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 17/120
	I0308 04:08:00.151594  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 18/120
	I0308 04:08:01.152966  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 19/120
	I0308 04:08:02.155036  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 20/120
	I0308 04:08:03.156653  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 21/120
	I0308 04:08:04.158068  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 22/120
	I0308 04:08:05.159553  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 23/120
	I0308 04:08:06.160833  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 24/120
	I0308 04:08:07.162807  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 25/120
	I0308 04:08:08.164244  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 26/120
	I0308 04:08:09.165543  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 27/120
	I0308 04:08:10.166894  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 28/120
	I0308 04:08:11.168463  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 29/120
	I0308 04:08:12.170702  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 30/120
	I0308 04:08:13.172145  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 31/120
	I0308 04:08:14.173510  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 32/120
	I0308 04:08:15.174825  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 33/120
	I0308 04:08:16.176094  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 34/120
	I0308 04:08:17.178079  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 35/120
	I0308 04:08:18.179334  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 36/120
	I0308 04:08:19.180844  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 37/120
	I0308 04:08:20.182471  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 38/120
	I0308 04:08:21.184001  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 39/120
	I0308 04:08:22.186306  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 40/120
	I0308 04:08:23.187731  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 41/120
	I0308 04:08:24.189239  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 42/120
	I0308 04:08:25.190818  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 43/120
	I0308 04:08:26.192383  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 44/120
	I0308 04:08:27.194552  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 45/120
	I0308 04:08:28.196027  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 46/120
	I0308 04:08:29.197640  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 47/120
	I0308 04:08:30.199133  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 48/120
	I0308 04:08:31.200505  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 49/120
	I0308 04:08:32.202789  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 50/120
	I0308 04:08:33.204239  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 51/120
	I0308 04:08:34.205699  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 52/120
	I0308 04:08:35.207181  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 53/120
	I0308 04:08:36.208588  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 54/120
	I0308 04:08:37.210706  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 55/120
	I0308 04:08:38.212403  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 56/120
	I0308 04:08:39.213859  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 57/120
	I0308 04:08:40.215224  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 58/120
	I0308 04:08:41.216700  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 59/120
	I0308 04:08:42.219363  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 60/120
	I0308 04:08:43.220987  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 61/120
	I0308 04:08:44.222384  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 62/120
	I0308 04:08:45.223762  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 63/120
	I0308 04:08:46.225808  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 64/120
	I0308 04:08:47.227593  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 65/120
	I0308 04:08:48.229058  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 66/120
	I0308 04:08:49.230741  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 67/120
	I0308 04:08:50.232162  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 68/120
	I0308 04:08:51.233712  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 69/120
	I0308 04:08:52.235812  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 70/120
	I0308 04:08:53.237346  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 71/120
	I0308 04:08:54.238768  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 72/120
	I0308 04:08:55.240139  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 73/120
	I0308 04:08:56.241501  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 74/120
	I0308 04:08:57.243554  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 75/120
	I0308 04:08:58.245037  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 76/120
	I0308 04:08:59.246517  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 77/120
	I0308 04:09:00.248023  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 78/120
	I0308 04:09:01.249408  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 79/120
	I0308 04:09:02.251769  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 80/120
	I0308 04:09:03.253329  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 81/120
	I0308 04:09:04.254605  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 82/120
	I0308 04:09:05.255934  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 83/120
	I0308 04:09:06.257487  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 84/120
	I0308 04:09:07.259648  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 85/120
	I0308 04:09:08.261396  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 86/120
	I0308 04:09:09.262977  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 87/120
	I0308 04:09:10.264469  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 88/120
	I0308 04:09:11.266079  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 89/120
	I0308 04:09:12.268008  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 90/120
	I0308 04:09:13.269636  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 91/120
	I0308 04:09:14.271118  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 92/120
	I0308 04:09:15.272720  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 93/120
	I0308 04:09:16.274342  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 94/120
	I0308 04:09:17.276385  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 95/120
	I0308 04:09:18.277862  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 96/120
	I0308 04:09:19.279286  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 97/120
	I0308 04:09:20.280893  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 98/120
	I0308 04:09:21.282184  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 99/120
	I0308 04:09:22.284666  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 100/120
	I0308 04:09:23.286133  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 101/120
	I0308 04:09:24.287446  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 102/120
	I0308 04:09:25.289172  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 103/120
	I0308 04:09:26.290606  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 104/120
	I0308 04:09:27.292711  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 105/120
	I0308 04:09:28.293998  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 106/120
	I0308 04:09:29.295421  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 107/120
	I0308 04:09:30.296731  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 108/120
	I0308 04:09:31.298099  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 109/120
	I0308 04:09:32.300299  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 110/120
	I0308 04:09:33.301730  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 111/120
	I0308 04:09:34.303869  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 112/120
	I0308 04:09:35.305179  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 113/120
	I0308 04:09:36.306625  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 114/120
	I0308 04:09:37.308738  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 115/120
	I0308 04:09:38.310213  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 116/120
	I0308 04:09:39.311596  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 117/120
	I0308 04:09:40.312843  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 118/120
	I0308 04:09:41.314217  958753 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for machine to stop 119/120
	I0308 04:09:42.315295  958753 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0308 04:09:42.315363  958753 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0308 04:09:42.317397  958753 out.go:177] 
	W0308 04:09:42.318740  958753 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0308 04:09:42.318753  958753 out.go:239] * 
	* 
	W0308 04:09:42.325305  958753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:09:42.326617  958753 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-968261 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261: exit status 3 (18.552901392s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:10:00.881590  959499 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host
	E0308 04:10:00.881615  959499 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-968261" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-496808 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-496808 create -f testdata/busybox.yaml: exit status 1 (46.09918ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-496808" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-496808 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 6 (252.828241ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:08:45.672190  959004 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-496808" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 6 (247.33722ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:08:45.926335  959035 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-496808" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-496808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-496808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.382467639s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-496808 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-496808 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-496808 describe deploy/metrics-server -n kube-system: exit status 1 (44.447455ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-496808" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-496808 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 6 (230.618819ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:10:15.582936  959758 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-496808" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.66s)
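
The metrics-server enable fails because the apply callback cannot reach the apiserver on localhost:8443, and the follow-up kubectl check fails because the context is absent from the kubeconfig. A plausible manual verification sequence, reusing the profile and namespace from this log (not part of the test run), might be:

    # Check host/apiserver state, then read back the metrics-server image if the deployment exists.
    out/minikube-linux-amd64 -p old-k8s-version-496808 status
    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh -- sudo crictl ps
    kubectl --context old-k8s-version-496808 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'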

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676: exit status 3 (3.167712414s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:09:02.865707  959160 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host
	E0308 04:09:02.865729  959160 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-477676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-477676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156469283s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-477676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676: exit status 3 (3.059235208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:09:12.081681  959231 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host
	E0308 04:09:12.081706  959231 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.214:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-477676" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
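
The repeated "connect: no route to host" on port 22 means the guest at 192.168.72.214 is unreachable over SSH, so both the status probe and the dashboard enable (which lists paused containers via crictl over an SSH session) fail; the embed-certs and default-k8s-diff-port runs below fail the same way against 192.168.50.137 and 192.168.61.32. A rough host-side check, assuming the libvirt domain carries the profile name as the kvm2 driver normally names it (hypothetical, outside the test run):

    # Is the domain up, does it still hold the expected address, and is sshd reachable?
    sudo virsh list --all
    sudo virsh domifaddr no-preload-477676
    nc -vz 192.168.72.214 22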

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634: exit status 3 (3.168332269s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:09:13.617854  959261 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0308 04:09:13.617885  959261 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-416634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-416634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156337455s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-416634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634: exit status 3 (3.058913376s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:09:22.833666  959377 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0308 04:09:22.833687  959377 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-416634" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261: exit status 3 (3.168093766s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:10:04.049742  959601 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host
	E0308 04:10:04.049764  959601 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-968261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-968261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157640427s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-968261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261: exit status 3 (3.058042766s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0308 04:10:13.265682  959672 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host
	E0308 04:10:13.265707  959672 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-968261" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (768.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0308 04:12:52.008116  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 04:13:32.256276  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 04:14:55.306380  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 04:17:52.008424  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 04:18:32.257159  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m44.907944104s)

                                                
                                                
-- stdout --
	* [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
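
The doubled "Generating certificates and keys" / "Booting up control plane" lines in the stdout above suggest the control-plane bring-up was attempted twice before the start finally exited with status 109. One way to gather more detail, combining the command the error boxes in this report already recommend with a hypothetical peek at the kubelet journal inside the guest:

    # Collect minikube logs and the tail of the kubelet journal from the VM.
    out/minikube-linux-amd64 -p old-k8s-version-496808 logs --file=logs.txt
    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh -- sudo journalctl -u kubelet --no-pager -n 50
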
** stderr ** 
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
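The provisioning block above refreshes the host-side CA/cert material under .minikube, issues a server certificate for the SANs printed at provision.go:117, and scps server.pem, server-key.pem and ca.pem into /etc/docker on the guest. Below is a rough, self-contained Go sketch of just the issuing step; the throwaway CA, subject names and lifetimes are assumptions made for illustration and this is not minikube's actual code.

// certsketch.go — hedged illustration of issuing a CA-signed server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

// newCA creates a throwaway self-signed CA so the sketch does not need the real ca.pem/ca-key.pem.
func newCA() (*x509.Certificate, *rsa.PrivateKey) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	ca, err := x509.ParseCertificate(der)
	if err != nil {
		log.Fatal(err)
	}
	return ca, key
}

func main() {
	ca, caKey := newCA()

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.old-k8s-version-496808"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list printed in the log.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-496808"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Printf("issued server certificate, %d PEM bytes (would be scp'd to /etc/docker/server.pem)\n", len(certPEM))
}
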
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
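Each provisioning step above is a single command executed on the guest over SSH, using the id_rsa key and docker user shown in the "new ssh client" lines. A minimal stand-alone sketch of that pattern with golang.org/x/crypto/ssh follows; it is illustrative only, not minikube's sshutil/ssh_runner implementation, and the key path and command text are simply copied from the log.

// sshrun.go — minimal sketch of running one provisioning command on the guest over SSH.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", "192.168.39.3:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The same command the log shows being piped through tee on the guest.
	out, err := session.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
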
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
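fix.go parses the guest's `date +%s.%N` output and compares it against the host-side timestamp, resynchronizing only when the delta exceeds a tolerance. A small sketch of that comparison, reusing the two timestamps from the log; the 2s tolerance is an assumption for illustration, not necessarily minikube's actual threshold.

// clocksketch.go — hedged sketch of the guest-clock tolerance check above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709871294.910163930") // value taken from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 3, 8, 4, 14, 54, 816936754, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
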
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
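Before switching to CRI-O, the competing runtimes are stopped, disabled and masked with plain systemctl calls; failures are tolerated because cri-docker may not exist on the Buildroot guest. A hypothetical local re-creation of that sequence, not minikube's code:

// disabledocker.go — hedged sketch of the stop/disable/mask sequence above for the docker units.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		// Best-effort, as in the log: the unit may simply not be present.
		log.Printf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		run("systemctl", "stop", "-f", unit)
	}
	run("systemctl", "disable", "cri-docker.socket")
	run("systemctl", "mask", "cri-docker.service")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "docker.service")
}
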
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
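The sed one-liners above point CRI-O at the v1.20-era pause image and the cgroupfs driver before the daemon is restarted. The same edit can be expressed as a short Go program; the seeded file content below is invented so the sketch is self-contained and it writes a local file rather than /etc/crio/crio.conf.d/02-crio.conf.

// criosketch.go — hedged sketch of the pause_image / cgroup_manager rewrites above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	// Seed a file so the example runs on its own.
	seed := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	if err := os.WriteFile(conf, []byte(seed), 0o644); err != nil {
		panic(err)
	}

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
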
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
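The hosts-file update above is the usual idempotent pattern: strip any stale host.minikube.internal line, append the current one, and copy the result back into place. A hedged Go equivalent, writing to a local file named hosts rather than /etc/hosts so it is harmless to run:

// hostsentry.go — illustrative Go version of the shell one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, _ := os.ReadFile(path) // a missing file just means we start empty
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}
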
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
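Because no preloaded images were found in the runtime, the ~450 MB preload tarball is copied over and unpacked into /var with tar -I lz4, preserving extended attributes. Re-running that exact command from Go could look like the sketch below; the paths are the ones from the log and the sketch assumes the tarball is present.

// preloadsketch.go — hypothetical re-run of the preload extraction command shown above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("preload extracted: %s", out)
}
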
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
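The "needs transfer" decisions above come from inspecting each image in the runtime and comparing the returned ID with the expected hash; the subsequent load then fails because the per-image cache files were never downloaded on this host. A hedged sketch of the inspection half of that check; the expected hash is copied from the coredns line above and the output handling is an assumption.

// imagecheck.go — hedged sketch of the "needs transfer" decision above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	img := "registry.k8s.io/coredns:1.7.0"
	want := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" // hash from the log
	fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, want))
}
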
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
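The kubelet drop-in and kubeadm.yaml shown above are rendered in memory and then scp'd to the guest. Rendering the drop-in with text/template could look like the sketch below; the parameter struct and field names are invented for illustration and are not minikube's.

// kubeletunit.go — illustrative rendering of the kubelet drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-496808", "192.168.39.3"}
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
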
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
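Since existing configuration was found on the node, every control-plane certificate is probed with `openssl x509 -checkend 86400` to confirm it will not expire within 24 hours. An equivalent check written against Go's crypto/x509 is sketched below; the file name used in main is a placeholder for the paths probed in the log.

// checkend.go — hedged Go equivalent of the `openssl x509 -checkend 86400` probes above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside the given window (the question -checkend asks openssl).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
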
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
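The four grep/rm pairs above all follow one pattern: look for the expected control-plane endpoint in a kubeconfig file and, when grep exits non-zero (file missing or endpoint absent), delete the file so kubeadm can regenerate it in the next phase. A minimal illustrative sketch of that pattern, with the endpoint and file list copied from the log (this is not minikube's own code in kubeadm.go, just the same idea):

package main

import (
	"fmt"
	"os/exec"
)

// For each kubeconfig file, grep for the control-plane endpoint and remove the
// file when the endpoint is not found (grep exits with a non-zero status).
func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep returns a non-zero exit status when the pattern or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

In this run every grep fails because the files do not exist at all, so the removals are effectively no-ops and the rebuild proceeds from a clean slate.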
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
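The five commands just above re-run kubeadm phase by phase instead of a full "kubeadm init", and the order matters: certs before kubeconfig, kubeconfig before kubelet-start, then control-plane and etcd. A hedged sketch of driving that sequence, with the phase names, config path, and pinned-binary PATH prefix taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Re-run the kubeadm init phases in the order shown in the log. The PATH
// prefix points at the pinned v1.20.0 binaries, as in the log; illustrative only.
func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
	}
}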
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
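The long run of identical pgrep lines above is a fixed-interval wait loop: roughly every 500 ms the runner checks whether a kube-apiserver process matching "kube-apiserver.*minikube.*" exists, and after about a minute without a hit (04:15:05 to 04:16:05 here) it gives up and falls back to collecting diagnostics. A minimal sketch of such a poll-with-deadline; the pgrep pattern and ~500 ms cadence come from the log, the 60 s deadline is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll for a kube-apiserver process until it appears or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}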
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
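Once the wait times out, the log switches to a diagnostic pass: it lists containers for each control-plane component with "crictl ps -a --quiet --name=<component>", then gathers the kubelet and CRI-O journals, dmesg, "kubectl describe nodes", and a container-status listing. Every listing comes back empty and describe nodes fails because nothing is serving on localhost:8443. A compact sketch of that collection pass, using only commands visible in the log (an illustrative sketch, not minikube's own logs.go):

package main

import (
	"fmt"
	"os/exec"
)

// List containers per component, then pull the journals the log collects.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
		if len(out) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
	for _, unit := range []string{"kubelet", "crio"} {
		// Last 400 journal lines per unit, as in the log (size reported, content discarded).
		out, _ := exec.Command("/bin/bash", "-c", "sudo journalctl -u "+unit+" -n 400").CombinedOutput()
		fmt.Printf("%s journal: %d bytes\n", unit, len(out))
	}
}

The cycles that follow repeat this same gather pass every few seconds; they differ only in timestamps and in the order the log sources are collected.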
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
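The cycle above repeats on roughly a three-second interval for the rest of this start attempt: minikube polls for a kube-apiserver process and, finding none, re-gathers the same kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, with every describe-nodes call refused on localhost:8443 because no apiserver container exists yet. Purely as an illustrative sketch (not minikube's implementation), the loop reduces to the shell commands below; each command is taken verbatim from the Run: lines in this log, while the root context and the three-second sleep are assumptions inferred from the sudo prefixes and timestamps.

    # Hypothetical reproduction of the diagnostic loop visible in the log,
    # assumed to run as root on the minikube guest.
    while ! pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                  kube-controller-manager kindnet kubernetes-dashboard; do
        crictl ps -a --quiet --name="$name"        # in this log every lookup returns no containers
      done
      journalctl -u kubelet -n 400                 # "Gathering logs for kubelet"
      dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig  # fails: connection to localhost:8443 refused
      journalctl -u crio -n 400                    # "Gathering logs for CRI-O"
      crictl ps -a || docker ps -a                 # "Gathering logs for container status"
      sleep 3                                      # assumed interval, inferred from the timestamps
    done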
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
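Each probe cycle above ends the same way: crictl finds no kube-apiserver (or any other control-plane) container, and "kubectl describe nodes" is refused on localhost:8443, so the apiserver never came back after the restart. A minimal way to confirm that state by hand from inside the node (assuming shell access to the VM, e.g. via "minikube ssh" with whatever profile this test uses, and that ss is available in the guest image) would be:

	# is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# the same container check the log keeps repeating
	sudo crictl ps -a --name kube-apiserver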
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
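Before re-running kubeadm init, the cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply absent after the reset, so every grep exits with status 2). A minimal bash sketch of that grep-and-remove step, using the same endpoint and file names shown in the log, would be:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done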
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	* 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	* 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
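For reference, a minimal sketch of the workaround the log itself suggests: re-running the failed start with the kubelet cgroup-driver override appended. The profile, driver, and runtime flags below are copied verbatim from the failing invocation above, and the --extra-config flag is the one minikube prints in its suggestion; whether it resolves this particular kubelet startup failure has not been verified here.

    out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The kubelet-side checks recommended by the kubeadm output can be run on the node through minikube ssh, for example:

    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"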
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (291.544645ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25: (1.592719863s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
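Note: the acquireMachinesLock entry above takes a cross-process lock on the machines directory with a 500ms retry delay and a 13m overall timeout; later entries report how long each profile waited for it. A minimal sketch of that acquire-with-retry pattern, where tryLock is a hypothetical stand-in rather than minikube's real mutex package:

    // Sketch of acquiring a shared lock with a fixed retry delay and an
    // overall timeout, as suggested by the Delay/Timeout fields logged above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func acquireWithRetry(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock() {
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        // Pretend another process holds the lock for two seconds.
        free := time.Now().Add(2 * time.Second)
        err := acquireWithRetry(func() bool { return time.Now().After(free) }, 500*time.Millisecond, 13*time.Minute)
        fmt.Println("lock acquired:", err == nil)
    }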
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
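Note: the retry.go entries above show the driver polling libvirt for the restarted VM's IP address, with the wait between attempts growing from a few hundred milliseconds to several seconds. A minimal sketch of that grow-the-wait polling loop, assuming a hypothetical lookupIP helper in place of the real DHCP-lease query (the actual intervals come from minikube's retry package):

    // Sketch of waiting for a machine IP with an increasing delay between
    // attempts, mirroring the "will retry after ..." lines above.
    package main

    import (
        "fmt"
        "time"
    )

    func waitForIP(lookupIP func() (string, bool), maxWait time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the wait between attempts
        }
        return "", fmt.Errorf("machine did not report an IP within %v", maxWait)
    }

    func main() {
        // Simulate a VM that obtains a lease after two seconds.
        up := time.Now().Add(2 * time.Second)
        ip, err := waitForIP(func() (string, bool) {
            if time.Now().After(up) {
                return "192.168.50.137", true
            }
            return "", false
        }, time.Minute)
        fmt.Println(ip, err)
    }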
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
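Note: the WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs a bare `exit 0` to confirm the guest accepts connections. An illustrative way to reproduce that probe from Go with os/exec, reusing a subset of the logged options and the key path from this run (not minikube's actual runner):

    // Re-run the logged SSH liveness probe against the embed-certs guest.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa",
            "-p", "22",
            "docker@192.168.50.137",
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        fmt.Printf("err=%v output=%q\n", err, string(out))
    }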
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
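Note: the two commands above set the guest's hostname and make sure the 127.0.1.1 entry in /etc/hosts points at it. An illustrative sketch of issuing the same hostname command over SSH with golang.org/x/crypto/ssh; minikube uses its own SSH runner, so only the user, address and key path are taken from this run:

    // Run the hostname provisioning command on the guest over SSH.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.50.137:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname`)
        fmt.Printf("err=%v output=%s\n", err, out)
    }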
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
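Note: the provision step above issues a server certificate whose SAN list covers the loopback address, the VM IP and the machine's hostnames, signed by the profile's CA. A simplified sketch that builds a certificate with the same SAN list using crypto/x509; it is self-signed here for brevity, whereas minikube signs it with the CA key named in the log:

    // Issue a certificate carrying the SAN list seen in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-416634"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-416634", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.137")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }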
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
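The clock check above runs date inside the guest (the %!s(MISSING).%!N(MISSING) artifacts are the logger mangling the literal "%s.%N" format string), parses the seconds.nanoseconds output, and compares it to the host clock. A small sketch of that comparison, assuming a 2-second tolerance purely for illustration (the actual threshold minikube applies is not shown in the log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1709871249.091803486") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // assumed tolerance, not necessarily minikube's
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be reset\n", delta)
	}
}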
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
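The netfilter step above probes the bridge sysctl first and, when /proc/sys/net/bridge is missing, falls back to loading br_netfilter before restarting cri-o. A minimal sketch of that probe-then-load sequence; exec.Command stands in for the ssh_runner calls in the log, and the function name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl first,
// and only if that fails (module not yet loaded) load br_netfilter.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // netfilter for bridges is already available
	}
	// sysctl failed, so the bridge netfilter module is missing; load it.
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("bridge netfilter available")
}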
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
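The interleaved 959713 lines above are the default-k8s-diff-port machine waiting for a DHCP lease, retrying with a growing, jittered delay ("will retry after ..."). A sketch of that wait loop under the assumption that lookup abstracts the libvirt lease query; names and the sample address are illustrative, not taken from minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with a randomized, growing delay, the pattern
// behind the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		// Grow the base delay with the attempt number and add jitter so parallel
		// waiters do not poll the host in lockstep.
		delay := time.Duration(i+1)*200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.2", nil // illustrative address, not from the log
	}, 10)
	fmt.Println(ip, err)
}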
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
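The preload check above asks crictl for its image list and looks for the expected kube-apiserver tag: before the tarball is extracted the image is missing ("assuming images are not preloaded"), afterwards it is found. A sketch of that check; the struct models only the fields needed from `crictl images --output json`, and the exact JSON field names should be treated as an assumption if your crictl version differs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `sudo crictl images --output json` that the
// preload check needs: each image and its repo tags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any image in the runtime carries the wanted tag,
// e.g. "registry.k8s.io/kube-apiserver:v1.28.4".
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == wanted {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}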
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
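The three ln -fs commands above explain the opaque link names (b5213941.0, 51391683.0, 3ec20f2e.0): each CA certificate is hashed with `openssl x509 -hash` and the hash plus ".0" becomes its symlink under /etc/ssl/certs, which is how TLS libraries locate trusted CAs. A sketch of that naming step, assuming local execution (the real run goes through ssh_runner and sudo); the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert reproduces the naming step from the log: OpenSSL's subject hash
// (e.g. b5213941) plus ".0" becomes the symlink name under /etc/ssl/certs.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Remove a stale link first so os.Symlink behaves like `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}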
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
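Each `openssl x509 -checkend 86400` call above asks whether the named certificate expires within the next 86400 seconds (24 hours), which decides whether the control-plane certs must be regenerated before restart. The same question can be answered in pure Go with crypto/x509; a minimal sketch, with the path reused from the log for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same check `openssl x509 -checkend 86400` performs above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}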
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
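The /healthz sequence above shows the typical restart progression: connection refused while the apiserver process starts, 403 for the anonymous probe, 500 while the rbac and scheduling post-start hooks finish, and finally 200. A minimal sketch of such a polling loop, assuming a plain HTTP client; the real check authenticates with the cluster's client certificate, and skipping TLS verification here is only to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			// 403 means the apiserver is up but anonymous access is forbidden;
			// 500 means some post-start hooks have not finished yet.
			if status == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", status)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.137:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}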
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
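(The bridge CNI step above only creates /etc/cni/net.d and copies a 457-byte 1-k8s.conflist into it; the log does not show the file's contents. For orientation, a hypothetical bridge conflist of roughly that shape written from Go; every field value here is an illustrative assumption, not the exact file minikube generates.)

package main

import "os"

// bridgeConflist is an illustrative bridge CNI config of the kind copied to
// /etc/cni/net.d above. The values are assumptions for this sketch only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing under /etc requires root; 0644 lets the kubelet and CRI-O read it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}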
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
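(The pod_ready waits in this log poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that check is below, reusing the coredns pod name from the log; the kubeconfig path is an assumption, and this is an illustration rather than minikube's pod_ready.go.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod carries a Ready=True condition,
// which is the condition the waits above are polling for.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-mqz25", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}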
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
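(The provisioning steps above run small shell snippets, such as the hostname and /etc/hosts edits, over SSH using the machine's private key with host-key checking disabled. A compact sketch of the same idea with golang.org/x/crypto/ssh follows; it mirrors the flags visible in the log but is not the libmachine implementation.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the machine and runs a single command, roughly what the
// "About to run SSH command" steps above amount to. Host-key checking is
// disabled only to mirror the provisioning flags in the log; do not do this
// against untrusted hosts.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.32:22", "docker",
		"/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa",
		`sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}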
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
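(The guest-clock check above compares the VM's reported clock, 1709871270.245462646 seconds, against the host-side timestamp and accepts the drift when the absolute delta is small. A worked version of that comparison using the values from the log; the 2s tolerance is an assumed threshold for illustration, not minikube's exact value.)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1709871270, 245462646)
	remote := time.Date(2024, 3, 8, 4, 14, 30, 154552705, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Assumed illustrative tolerance; the log only states the delta is "within tolerance".
	if delta <= 2*time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints ~90.909941ms
	} else {
		fmt.Printf("guest clock drift %v too large; would resync\n", delta)
	}
}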
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
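(The CRI-O reconfiguration above is a handful of in-place line rewrites of /etc/crio/crio.conf.d/02-crio.conf followed by a crio restart. An equivalent Go sketch of the pause-image and cgroup-manager rewrites; the helper itself is illustrative, only the paths and values are taken from the log.)

package main

import (
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl, mimicking the
// `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log above.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	must(rewriteLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`))
	must(rewriteLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`))
	// A real run would follow with `systemctl daemon-reload` and `systemctl restart crio`,
	// as the log shows.
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}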
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
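For context on the configureAuth step that just finished (provision.go:117 generates a server certificate signed by the minikube CA with the SANs listed above), here is a minimal Go sketch of that kind of signing. The file names, the 2048-bit key size, and the PKCS#1 CA key format are assumptions for illustration, not minikube's exact implementation.

// servercert_sketch.go: sign a server certificate with an existing CA,
// using the same SANs the log above lists for no-preload-477676.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

// mustDecode reads a PEM file and returns the DER bytes of its first block.
func mustDecode(path string) []byte {
    raw, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        panic("no PEM block in " + path)
    }
    return block.Bytes
}

func main() {
    // Hypothetical local paths; the log uses ca.pem / ca-key.pem under the minikube certs dir.
    caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
    if err != nil {
        panic(err)
    }
    caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes an RSA PKCS#1 CA key
    if err != nil {
        panic(err)
    }
    serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-477676"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // SANs from the log: 127.0.0.1 192.168.72.214 localhost minikube no-preload-477676
        DNSNames:    []string{"localhost", "minikube", "no-preload-477676"},
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.214")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}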
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
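The CRI-O reconfiguration logged just above (pause image, cgroup manager, conmon cgroup, kernel prerequisites, crio restart) boils down to a handful of shell edits. Below is a minimal local Go sketch of the same steps; it assumes direct root shell access on the guest, whereas minikube issues these commands through its ssh_runner.

// crio_config_sketch.go: apply the CRI-O tweaks the log shows, locally.
package main

import (
    "fmt"
    "os/exec"
)

func run(cmdline string) error {
    out, err := exec.Command("sh", "-c", cmdline).CombinedOutput()
    if err != nil {
        return fmt.Errorf("%q failed: %v\n%s", cmdline, err, out)
    }
    return nil
}

func main() {
    steps := []string{
        // Point CRI-O at the pause image kubeadm expects.
        `sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
        // Match the kubelet's cgroup driver (cgroupfs in this run).
        `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
        `sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
        `sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
        // Kernel prerequisites for the bridge CNI.
        `modprobe br_netfilter`,
        `echo 1 > /proc/sys/net/ipv4/ip_forward`,
        // Pick up the new config.
        `systemctl daemon-reload`,
        `systemctl restart crio`,
    }
    for _, s := range steps {
        if err := run(s); err != nil {
            fmt.Println(err)
            return
        }
    }
    fmt.Println("crio reconfigured")
}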
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
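The 959882 process above is polling roughly every 500ms for a kube-apiserver whose command line matches the minikube pattern. A rough Go approximation of that wait loop, run locally rather than over minikube's ssh_runner, with an illustrative four-minute deadline:

// apiserver_wait_sketch.go: wait for a matching kube-apiserver process.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    pattern := `kube-apiserver.*minikube.*`
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        // -x: match the whole command line, -n: newest match, -f: match full args.
        out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
        if err == nil && len(out) > 0 {
            fmt.Printf("kube-apiserver is up, pid %s", out)
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for kube-apiserver")
}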
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
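The LoadCachedImages sequence that just completed follows a simple per-image pattern: inspect the current image ID with podman, and if it does not match the expected hash, remove the stale tag with crictl and load the cached tarball with podman load. A simplified Go sketch of that flow for a single image (the flow and paths are illustrative; the expected ID is the one reported in the log above for kube-apiserver):

// load_cached_image_sketch.go: load one cached image if it is missing or stale.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// imageID returns the local podman image ID for ref, if any.
func imageID(ref string) (string, error) {
    out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    ref := "registry.k8s.io/kube-apiserver:v1.29.0-rc.2"
    wantID := "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" // from the log above
    tarball := "/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2"

    id, err := imageID(ref)
    if err == nil && id == wantID {
        fmt.Println("image already present, skipping load")
        return
    }
    // The image "needs transfer": drop any stale tag, then load from the cache tarball.
    exec.Command("sudo", "crictl", "rmi", ref).Run()
    if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
        fmt.Printf("podman load failed: %v\n%s", err, out)
        return
    }
    fmt.Println("transferred and loaded", ref, "from cache")
}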
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
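
The six openssl runs above are expiry probes: each "-checkend 86400" invocation asks whether the corresponding control-plane certificate will expire within the next 24 hours. A minimal Go sketch of the same check follows, for illustration only; the standalone program and the certificate path are assumptions, not minikube's actual code.

// checkend.go - illustrative sketch of an "expires within 24h?" check,
// equivalent in spirit to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log above probes several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
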
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
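
The burst of pgrep calls above is a poll loop: api_server.go re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms until an apiserver process shows up, then records the duration metric. The Go sketch below is a rough illustration of that pattern only, not minikube's actual implementation; the 2-minute deadline is an assumption.

// procwait.go - illustrative poll-for-process loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// -x exact match, -n newest, -f match the full command line,
		// mirroring the pgrep invocation seen in the log.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver process found: pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
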
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
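
The sequence above is the second wait stage: once the process exists, the harness polls https://192.168.72.214:8443/healthz, tolerating the transient 403 (anonymous user) and 500 (rbac/bootstrap-roles and system-priority-classes post-start hooks still running) responses until the endpoint returns 200. The Go sketch below illustrates such a poll under stated assumptions (deadline and interval are invented; certificate verification is skipped because the apiserver certificate is not in the host trust store); it is not minikube's code.

// healthzpoll.go - illustrative healthz polling loop.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip TLS verification for this unauthenticated health probe (sketch only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.214:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)  // assumed timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
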
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
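	The recurring "failed describe nodes ... Process exited with status 1" blocks are the bundled kubectl failing because nothing answers on localhost:8443. A hedged sketch of running that command and surfacing the exit status plus stderr with plain os/exec (illustrative helper, not minikube's ssh_runner):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// describeNodes shells out to the bundled kubectl and returns an error that
	// carries stdout and stderr, mirroring the log entries above.
	func describeNodes(kubectl, kubeconfig string) error {
		cmd := exec.Command("sudo", kubectl, "describe", "nodes", "--kubeconfig="+kubeconfig)
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// With the apiserver down, kubectl exits 1 and stderr carries
			// "The connection to the server localhost:8443 was refused".
			return fmt.Errorf("describe nodes: %w\nstdout:\n%s\nstderr:\n%s",
				err, stdout.String(), stderr.String())
		}
		fmt.Print(stdout.String())
		return nil
	}

	func main() {
		if err := describeNodes("/var/lib/minikube/binaries/v1.20.0/kubectl",
			"/var/lib/minikube/kubeconfig"); err != nil {
			fmt.Println(err)
		}
	}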
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
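	The pod_ready.go:102 lines from the other three profiles are a readiness poll: the same metrics-server pod is re-checked every couple of seconds and logged with "Ready":"False" until its PodReady condition turns true. A minimal client-go sketch of that kind of check (assumed names and intervals, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "metrics-server-57f55c9bc5-6nb8p", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet")
			time.Sleep(2 * time.Second)
		}
	}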
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
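	Every component check in these cycles is the same probe: sudo crictl ps -a --quiet --name=<component>, where empty output is logged as found id: "" followed by "No container was found matching". A small illustrative version of that probe:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs crictl reports for a name filter;
	// an empty slice corresponds to the "No container was found matching" lines.
	func listContainerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		var ids []string
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				ids = append(ids, id)
			}
		}
		return ids
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids := listContainerIDs(name)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}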
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
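	Between gather passes the runner keeps issuing sudo pgrep -xnf kube-apiserver.*minikube.*; the cycles repeat because no apiserver process ever appears. A sketch of such a wait loop (the timeout and interval are assumptions, not the values minikube uses):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning checks for a kube-apiserver process; pgrep exits 0 only
	// when a matching process exists.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}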
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
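	"The connection to the server localhost:8443 was refused" simply means nothing is listening on the apiserver port on the guest. The same condition can be confirmed directly with a quick TCP dial (sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the refused connection reported throughout these logs.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}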
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
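	The "container status" section uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl, and fall back to docker if crictl is missing or fails. The same idea expressed in Go (hypothetical containerStatus helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus lists all containers, preferring crictl and falling back
	// to docker when crictl is unavailable or errors out.
	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
				return out, nil
			}
		}
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("could not list containers:", err)
			return
		}
		fmt.Print(string(out))
	}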
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
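A minimal sketch of the diagnostic cycle the log repeats above, assuming the commands are executed over SSH on the control-plane node the way minikube's ssh_runner does. The individual commands, the v1.20.0 kubectl path and the kubeconfig path are taken verbatim from the log lines; only the loop over component names is added here for brevity.

    # list any container (running or exited) for each expected control-plane component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # gather the node-level logs minikube collects when no containers are found
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Every pass returns empty container IDs and the describe-nodes call fails, which is why the same block recurs roughly every three seconds for the duration of the wait.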
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
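Each describe-nodes attempt fails with the same connection-refused error because nothing is listening on the API server port; no kube-apiserver container is ever found in the cycles above. A quick standalone probe of that port, using a standard tool rather than anything captured in the run (the exact command is an assumption), would look like:

    # hypothetical check, not part of the captured run: see whether anything answers on the apiserver port
    curl -sk https://localhost:8443/healthz || echo "no API server listening on localhost:8443"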
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
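
Each "describe nodes" attempt in these cycles fails the same way: kubectl, pointed at the node's own kubeconfig, gets "The connection to the server localhost:8443 was refused", meaning no apiserver is listening yet, which is consistent with every crictl probe coming back empty. A hedged sketch of how one might confirm that from the guest (hypothetical manual checks, not commands issued by the test harness):

    # Sketch: hypothetical manual checks on the guest, not part of the harness.
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443 yet"
    # The same kubectl binary and kubeconfig the log uses keeps failing with
    # "connection refused" until an apiserver container is actually running:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
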
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
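
At this point the waiter for metrics-server-57f55c9bc5-qnq74 has used its full 4m0s budget without the pod reporting Ready, so minikube gives up on restarting the existing control plane and falls back to a clean kubeadm reset before re-bootstrapping the cluster. A sketch of the condition the pod_ready poller keeps checking, expressed as a manual kubectl query (assumes kubectl is pointed at the same cluster; the context name is not shown in this excerpt):

    # Sketch: manual view of the Ready condition that pod_ready.go polls.
    # Assumes access to the cluster under test (context not shown above).
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-qnq74 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # Prints "False" while the pod is unready, matching the repeated
    # pod_ready.go:102 "Ready":"False" lines above.
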
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
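(Editor's note, not part of the captured log: the block above is one full diagnostics pass — minikube queries CRI-O for each control-plane component by name and, finding none, falls back to kubelet/dmesg/CRI-O logs. The sketch below, in Go, only illustrates that container-listing step; it runs crictl locally, whereas the report runs it on the guest over SSH, and it is not minikube's actual implementation.)

    // Illustrative sketch only: list CRI containers by name, as in the
    // "sudo crictl ps -a --quiet --name=<component>" entries above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs crictl reports for a container name.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%q containers: %v\n", name, ids)
        }
    }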
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
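(Editor's note, not part of the captured log: the "context deadline exceeded" entry above is the shape of a deadline-bounded readiness wait — the pod never reported Ready within the 4m budget, so the wait returns the context error and the test moves on. A minimal Go sketch of that pattern follows; checkReady is a hypothetical stand-in for the real pod-status check, and the timeout is shortened here so the example finishes quickly.)

    // Illustrative sketch only: poll a readiness condition until a context
    // deadline expires, mirroring the pod_ready wait logged above.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func waitPodReady(ctx context.Context, checkReady func() bool) error {
        ticker := time.NewTicker(200 * time.Millisecond)
        defer ticker.Stop()
        for {
            if checkReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // "context deadline exceeded" once the budget is spent
            case <-ticker.C:
            }
        }
    }

    func main() {
        // The report's budget was 4m; 2s here keeps the example fast.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := waitPodReady(ctx, func() bool { return false }) // never Ready, as in the log
        if errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("waitPodCondition:", err)
        }
    }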
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
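(Editor's note, not part of the captured log: once container IDs are known, the report gathers each component's logs with "crictl logs --tail 400 <id>", as in the entries above. The Go sketch below shows only that step; the container ID is a placeholder, not one taken from this report, and the real run executes the command on the guest rather than locally.)

    // Illustrative sketch only: fetch the last N log lines of one container.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherContainerLogs(id string, tail int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := gatherContainerLogs("<container-id>", 400) // placeholder ID
        if err != nil {
            fmt.Println("gathering logs failed:", err)
            return
        }
        fmt.Print(logs)
    }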
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
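	(The lines above show the start-up code polling the apiserver's /healthz endpoint at https://192.168.50.137:8443/healthz until it answers HTTP 200 with the body "ok". As a minimal sketch only, assuming a plain net/http client with certificate verification skipped, and not minikube's actual api_server.go implementation, a poll of that shape could look like:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns 200/"ok" or the deadline passes.
	    // Hypothetical helper for illustration; names and timings are assumptions.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            // The apiserver presents a cluster-local CA here, so this sketch
	            // simply skips verification (an assumption, not the real code's behaviour).
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver at %s did not become healthy within %v", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.50.137:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }

	The same pattern appears later in the log for the kubelet's localhost:10248/healthz check, where repeated "connection refused" errors mean the loop keeps retrying until its own timeout.)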
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
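	(The repeated "9 kube-system pods found ... will retry after Nms: missing components: kube-dns, kube-proxy" lines above are a retry-with-backoff loop that re-lists kube-system pods until every expected component reports Running. A rough sketch of such a loop, assuming a generic check function and jittered growing delays, and not minikube's actual retry.go:

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryUntil re-runs check with a jittered, growing delay until it returns
	    // nil or the overall timeout expires. Hypothetical helper for illustration.
	    func retryUntil(timeout time.Duration, check func() error) error {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for {
	            err := check()
	            if err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out: %w", err)
	            }
	            // Jitter keeps concurrent callers from polling in lockstep.
	            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
	            fmt.Printf("will retry after %v: %v\n", sleep, err)
	            time.Sleep(sleep)
	            delay = delay * 3 / 2
	        }
	    }

	    func main() {
	        missing := 3 // stand-in for "components not yet Running"
	        _ = retryUntil(time.Minute, func() error {
	            if missing > 0 {
	                missing--
	                return fmt.Errorf("missing components: kube-dns, kube-proxy")
	            }
	            return nil
	        })
	    }

	In the log the loop converges after four listings, once kube-proxy, both coredns replicas and storage-provisioner move from Pending to Running.)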
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
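	(Two lines earlier the tool reports "kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)", i.e. the difference between the client's and the cluster's minor versions. A small illustrative calculation of that skew, with a hypothetical helper name and no claim to match minikube's own version-check code:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	    )

	    // minorSkew returns the absolute difference between the minor versions of
	    // a kubectl client and a cluster, e.g. ("1.29.2", "1.28.4") -> 1.
	    func minorSkew(client, cluster string) (int, error) {
	        minor := func(v string) (int, error) {
	            parts := strings.Split(v, ".")
	            if len(parts) < 2 {
	                return 0, fmt.Errorf("unexpected version %q", v)
	            }
	            return strconv.Atoi(parts[1])
	        }
	        c, err := minor(client)
	        if err != nil {
	            return 0, err
	        }
	        s, err := minor(cluster)
	        if err != nil {
	            return 0, err
	        }
	        if c < s {
	            return s - c, nil
	        }
	        return c - s, nil
	    }

	    func main() {
	        skew, _ := minorSkew("1.29.2", "1.28.4")
	        fmt.Println("minor skew:", skew) // 1; a larger skew would typically warrant a warning
	    }
	)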
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
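	(The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint with grep and removes any file that does not contain it, so the subsequent kubeadm init starts from a clean slate. A compact sketch of that check-and-remove pass, run locally with os/exec for illustration only; the real flow executes these commands on the guest over SSH via ssh_runner:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // cleanStaleKubeconfigs keeps each config only if it already points at the
	    // expected control-plane endpoint, otherwise removes it. Hypothetical helper.
	    func cleanStaleKubeconfigs(endpoint string, files []string) {
	        for _, f := range files {
	            // grep exits non-zero when the endpoint (or the file itself) is missing.
	            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
	                fmt.Printf("%q not found in %s, removing\n", endpoint, f)
	                _ = exec.Command("sudo", "rm", "-f", f).Run()
	            }
	        }
	    }

	    func main() {
	        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        })
	    }

	Here all four files are already absent after the preceding kubeadm reset, so every grep fails with "No such file or directory" and the rm calls are effectively no-ops.)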
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
	
	
	==> CRI-O <==
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.836433683Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709871785836361455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66746758-4061-4a7c-a614-0ce1ed2f5b7b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.838181402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fae32d69-2abe-44bd-a16e-218b577a37a6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.838248725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fae32d69-2abe-44bd-a16e-218b577a37a6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.838302160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fae32d69-2abe-44bd-a16e-218b577a37a6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.879115539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef59c4fc-130f-44e5-bf79-e6939956a057 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.879214979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef59c4fc-130f-44e5-bf79-e6939956a057 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.881123853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebfa3680-54fd-4587-abcf-4f7111036d2f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.881555182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709871785881532775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebfa3680-54fd-4587-abcf-4f7111036d2f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.882328735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8cd09be-b6b9-478b-a384-38fefff6009f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.882411215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8cd09be-b6b9-478b-a384-38fefff6009f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.882469196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a8cd09be-b6b9-478b-a384-38fefff6009f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.919052034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9be68a8-4ea5-4451-9ec5-2ac655050599 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.919148086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9be68a8-4ea5-4451-9ec5-2ac655050599 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.920688022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=984228a8-329a-4fba-9a81-d13d17a2b34a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.921138958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709871785921110457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=984228a8-329a-4fba-9a81-d13d17a2b34a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.921585934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=582f9830-ab3e-4b86-b2b2-750d4cd542f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.921639337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=582f9830-ab3e-4b86-b2b2-750d4cd542f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.921668974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=582f9830-ab3e-4b86-b2b2-750d4cd542f3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.955354422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5b1dee7-4097-4308-8228-96f5562192fc name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.955425190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5b1dee7-4097-4308-8228-96f5562192fc name=/runtime.v1.RuntimeService/Version
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.956323897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8747cf69-1df4-4585-919e-912843c53214 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.956719338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709871785956698037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8747cf69-1df4-4585-919e-912843c53214 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.957355608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b84c9abd-28ff-4063-9a90-a4b2e7891976 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.957406702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b84c9abd-28ff-4063-9a90-a4b2e7891976 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:23:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:23:05.957444678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b84c9abd-28ff-4063-9a90-a4b2e7891976 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar 8 04:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053945] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.875570] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.587428] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.467385] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.950443] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.070135] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073031] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.179936] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.161996] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.305208] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Mar 8 04:15] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.072099] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.055797] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.463903] kauditd_printk_skb: 46 callbacks suppressed
	[Mar 8 04:19] systemd-fstab-generator[5010]: Ignoring "noauto" option for root device
	[Mar 8 04:21] systemd-fstab-generator[5289]: Ignoring "noauto" option for root device
	[  +0.072080] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:23:06 up 8 min,  0 users,  load average: 0.27, 0.16, 0.10
	Linux old-k8s-version-496808 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0005385a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008454a0, 0x24, 0x60, 0x7f69c457d690, 0x118, ...)
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: net/http.(*Transport).dial(0xc000858000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0008454a0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: net/http.(*Transport).dialConn(0xc000858000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0000c6480, 0x5, 0xc0008454a0, 0x24, 0x0, 0xc0007b4ea0, ...)
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: net/http.(*Transport).dialConnFor(0xc000858000, 0xc000784fd0)
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: created by net/http.(*Transport).queueForDial
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: goroutine 161 [select]:
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0001d3f80, 0xc000b79a80, 0xc0008a5740, 0xc0008a56e0)
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]: created by net.(*netFD).connect
	Mar 08 04:23:03 old-k8s-version-496808 kubelet[5468]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Mar 08 04:23:03 old-k8s-version-496808 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 08 04:23:03 old-k8s-version-496808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 08 04:23:04 old-k8s-version-496808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 08 04:23:04 old-k8s-version-496808 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 08 04:23:04 old-k8s-version-496808 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 08 04:23:04 old-k8s-version-496808 kubelet[5533]: I0308 04:23:04.510025    5533 server.go:416] Version: v1.20.0
	Mar 08 04:23:04 old-k8s-version-496808 kubelet[5533]: I0308 04:23:04.510268    5533 server.go:837] Client rotation is on, will bootstrap in background
	Mar 08 04:23:04 old-k8s-version-496808 kubelet[5533]: I0308 04:23:04.512171    5533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 08 04:23:04 old-k8s-version-496808 kubelet[5533]: W0308 04:23:04.513218    5533 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 08 04:23:04 old-k8s-version-496808 kubelet[5533]: I0308 04:23:04.513560    5533 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (278.682386ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-496808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (768.42s)
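The failure above is minikube exiting with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase timed out: the kubelet journal shows the service crash-looping (restart counter at 20) and warns "Cannot detect current cgroup on cgroup v2", and the log's own suggestion is to inspect the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of that retry, with flags copied from the Audit table above and the suggested --extra-config added; this is illustrative only, and this run does not confirm that the systemd driver resolves the crash loop:

    out/minikube-linux-amd64 -p old-k8s-version-496808 ssh "sudo journalctl -xeu kubelet"
    out/minikube-linux-amd64 start -p old-k8s-version-496808 --memory=2200 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd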

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0308 04:19:15.055181  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:28:11.49539231 +0000 UTC m=+5544.534301338
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
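The wait above targets pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace on profile default-k8s-diff-port-968261 and hits the 9m0s context deadline. A minimal sketch of repeating that check by hand, assuming the kubeconfig context name matches the minikube profile name; it only shows how to inspect the selector the test polls, not why the pod never became ready:

    kubectl --context default-k8s-diff-port-968261 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context default-k8s-diff-port-968261 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard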
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-968261 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-968261 logs -n 25: (2.058259475s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
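The block above is libmachine polling the mk-old-k8s-version-496808 libvirt network until the domain's MAC address picks up a DHCP lease, backing off between attempts. As a rough illustration only (not minikube's actual retry.go code), the same wait can be reproduced from the host with virsh; the MAC and network name are taken from the log, while the attempt count and backoff values are arbitrary.

    #!/usr/bin/env bash
    # Poll the libvirt network's DHCP leases until the domain's MAC shows up
    # with an IP, doubling the delay between attempts (illustrative only).
    net="mk-old-k8s-version-496808"
    mac="52:54:00:0b:c9:35"
    delay=1
    for attempt in $(seq 1 10); do
      # Column 3 of `virsh net-dhcp-leases` is the MAC, column 5 the IP/prefix.
      ip=$(virsh net-dhcp-leases "$net" 2>/dev/null | awk -v m="$mac" '$3 == m {print $5}' | cut -d/ -f1)
      if [ -n "$ip" ]; then
        echo "machine is up at $ip"
        exit 0
      fi
      echo "attempt $attempt: no lease for $mac yet, retrying in ${delay}s"
      sleep "$delay"
      delay=$((delay * 2))
    done
    echo "timed out waiting for machine to come up" >&2
    exit 1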
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
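The repeated 500 responses above are kube-apiserver's /healthz endpoint reporting that several post-start hooks have not finished yet; minikube keeps polling until the endpoint returns 200. For reference, the same endpoint can be probed by hand with curl, assuming the default system:public-info-viewer binding still allows anonymous access to /healthz. Note that -k skips TLS verification here, whereas minikube's own check trusts the cluster CA; the address and port come from the log.

    # HTTP status only.
    curl -ks -o /dev/null -w 'healthz: %{http_code}\n' https://192.168.61.32:8444/healthz

    # Per-hook breakdown (the [+]/[-] lines seen in the log), even on success.
    curl -ks 'https://192.168.61.32:8444/healthz?verbose'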
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
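Here minikube writes a 457-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown in the log. The sketch below is a generic bridge plus host-local conflist of the kind the bridge CNI option produces, not the exact bytes from this run; in particular the subnet and version are assumptions.

    # Write a minimal bridge CNI config (illustrative values, not this run's file).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF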
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
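The "extra waiting" loop above polls the system-critical pods until they report Ready (or, as in this run, skips each pod because the node itself is not Ready yet). A hand-run equivalent against the same profile's kubeconfig context might look like the following; the context name comes from the log and the 4m timeout mirrors the wait budget used above.

    # Block until DNS and kube-proxy pods are Ready, then list the static
    # control-plane pods by component label.
    kubectl --context default-k8s-diff-port-968261 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl --context default-k8s-diff-port-968261 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m
    kubectl --context default-k8s-diff-port-968261 -n kube-system \
      get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'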
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
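
The cni.go entries above sideline any bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI that minikube configures later. A rough local equivalent of that rename pass (directory and suffix come from the log; the Go helper itself is only a sketch, since minikube drives this through find/mv over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv pipeline in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
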
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
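
The crio.go entries above treat the failed sysctl as non-fatal: the bridge-nf-call-iptables key only exists once br_netfilter is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A compressed sketch of that fallback chain (command strings are taken from the log; the wrapper itself is illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
// load br_netfilter if the key is missing, then turn on ip_forward.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// "couldn't verify netfilter ... which might be okay" in the log:
		// the sysctl key appears only after the module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
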
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
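
The cache_images.go sequence above is the no-preload path: for each required image it checks whether the container runtime already holds it, removes any stale copy, and otherwise copies the tarball from the host cache (skipped above because the tarballs already exist on the VM) and loads it with podman load. A condensed, per-image sketch of that decision (the image name and tarball path mirror the log; the helper is an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage loads a cached image tarball into the runtime unless the
// image is already present, mirroring the "needs transfer" checks above.
func loadCachedImage(image, tarball string) error {
	// "podman image inspect" fails when the image is absent; that is the
	// "needs transfer" case reported in the log.
	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		return nil // already present, nothing to transfer
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
	)
	fmt.Println(err)
}
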
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
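
The certs.go steps above install each extra CA the way OpenSSL expects to find it: the certificate is copied into /usr/share/ca-certificates, its subject hash is computed with openssl x509 -hash, and /etc/ssl/certs/<hash>.0 is symlinked at it (3ec20f2e.0, b5213941.0 and 51391683.0 in the log). A small in-process sketch of that step, assuming direct filesystem access rather than minikube's ln -fs over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCAHashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// OpenSSL uses to locate a trusted certificate.
func installCAHashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installCAHashLink("/usr/share/ca-certificates/9189882.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
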
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
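
The run of openssl x509 -checkend 86400 calls above confirms that none of the existing control-plane certificates expire within the next 24 hours before they are reused. The same question can be asked in-process with crypto/x509; this sketch is an illustrative alternative to shelling out, using one of the file paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same check "openssl x509 -checkend" performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
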
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
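
Because the kubeconfig files under /etc/kubernetes were missing, kubeadm.go replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of running a full init. A sketch of that phase loop (the config path and binaries directory are taken from the log; the loop itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// runInitPhases replays selected "kubeadm init phase" steps against an
// existing config, as done when restarting an existing control plane.
func runInitPhases(kubeadmCfg, binDir string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", kubeadmCfg)
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v: %s", phase, err, out)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/tmp/minikube/kubeadm.yaml", "/var/lib/minikube/binaries/v1.29.0-rc.2")
	fmt.Println(err)
}
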
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
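	(The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls are a wait loop: poll until a kube-apiserver process whose command line mentions minikube exists. A minimal sketch of that loop, with the timeout value chosen only for illustration:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until the kube-apiserver
	// process appears or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // pgrep exits 0 once a match exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}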
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
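	(The healthz wait above shows the usual bring-up progression: 403 while anonymous access is still forbidden, 500 while poststarthooks such as rbac/bootstrap-roles are failing, then 200 "ok". A minimal sketch of that polling pattern, assuming the endpoint URL and timeout are placeholders and certificate verification is skipped because the apiserver serves a self-signed cert during bring-up:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns HTTP 200.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// Early answers are 403 or 500, as in the log above.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.214:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}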
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
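	(The pod_ready.go entries throughout these runs poll each pod's Ready condition until it reports True or the wait times out; the "status Ready: False" lines are the intermediate polls. A minimal client-go sketch of that check, assuming the kubeconfig path, namespace, and pod name are placeholders:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-v42lx", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod never became Ready")
	}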
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
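	(Each cri.go/logs.go cycle above queries crictl for containers matching one well-known name, in any state, and treats empty output as "no container found". A sketch of that query, assuming crictl is available on the node and commands run locally:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs crictl reports for the
	// given name pattern; crictl exits 0 with empty output when nothing
	// matches, which becomes an empty slice here.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		} {
			ids, err := listContainerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}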
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
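	(When no control-plane containers are found, the tool falls back to gathering diagnostics: kubelet and CRI-O journals, dmesg, kubectl describe nodes, and a container status listing; "describe nodes" fails here with "connection refused" because the apiserver is down. A sketch that runs the same command set, with the shell strings copied from the log and local execution assumed:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Diagnostic commands as they appear in the log above.
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range cmds {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", name, out)
			if err != nil {
				fmt.Printf("(%s exited with error: %v)\n", name, err)
			}
		}
	}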
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
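	(A minimal shell sketch of the container-presence check the log lines above show being run over SSH; the component names and the crictl fallback are copied from the logged commands, while the loop structure itself is only assumed for illustration.)

	    # Check each expected control-plane component via CRI; an empty result
	    # corresponds to the "No container was found matching ..." warnings above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "No container was found matching \"$name\"" >&2
	      fi
	    done

	    # Fallback listing used for the "container status" log section.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a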
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
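	(For reference, the log-gathering commands that appear verbatim in the "Gathering logs for ..." lines above, collected as a plain shell sketch; the paths and flags are taken from the log, and running them back-to-back as one script is an assumption.)

	    # kubelet and CRI-O service logs
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # kernel warnings and above
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # node description via the bundled kubectl; while the apiserver is down this
	    # fails with "The connection to the server localhost:8443 was refused"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig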
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
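
For readers retracing this loop by hand: the container probes above can be reproduced on the node with the same crictl invocations minikube issues. This is a minimal shell sketch, assuming SSH access to the minikube machine and crictl on the PATH; the component names are taken verbatim from the log lines above.

    # Probe each expected control-plane container by name, as in the log above.
    # An empty result for every name corresponds to the "0 containers: []" lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
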
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
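
Each cycle also attempts "describe nodes" and fails with a refused connection to localhost:8443, which is consistent with the empty kube-apiserver probes above. A minimal sketch for confirming this by hand, reusing the exact kubectl path from the log; the curl health probe is an illustrative assumption, not something minikube runs here.

    # Re-run the exact describe command from the log; with no apiserver container
    # running it exits 1 with "connection ... refused" on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig

    # Optional cross-check (assumption, not from the log): the apiserver health
    # endpoint should be equally unreachable while no kube-apiserver container exists.
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
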
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
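
Before each probe round, the log shows a pgrep check for a running apiserver process. A two-line sketch of that gate, using the same pattern as the log; a non-zero exit status (no matching process) is what keeps the retry loop going.

    # -x whole-line match, -n newest matching process, -f match the full command line.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" \
      || echo "no apiserver process; keep retrying"
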
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
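
Editor's note: the block above ends with minikube's post-start verification (pod list, default service account, kubelet service, NodePressure). A minimal sketch, not minikube's own code, of the capacity check recorded in the node_conditions lines, using client-go; the kubeconfig path and node name here are assumptions for illustration only.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the test harness uses its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-968261", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Read the same figures the log reports: CPU and ephemeral-storage capacity.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s, ephemeral storage is %s\n", cpu.String(), storage.String())
}
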
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
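
Editor's note: the repeating pod_ready.go lines above (and throughout the rest of this log) are one process polling a metrics-server pod that never reports Ready. A minimal sketch, not minikube's pod_ready.go, of that kind of Ready-condition poll with client-go; the kubeconfig path, namespace, and polling interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-6nb8p")
		fmt.Printf("ready=%v err=%v\n", ready, err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second) // the log above polls on a similar cadence
	}
}
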
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
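
Editor's note: the kubeadm join commands printed above carry a --discovery-token-ca-cert-hash. A minimal sketch, assuming the standard kubeadm CA public-key pinning scheme (SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate), of how such a value is derived; the CA path is the conventional kubeadm location, not something read from this log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm CA location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
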
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
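Editorial note: the kubeadm error text above spells out a troubleshooting recipe: list all containers with crictl, keep the Kubernetes ones (skipping pause sandboxes), then pull the logs of the failing container. The Go sketch below just shells out to those same commands; it assumes crictl is on PATH and reuses the CRI-O socket path from the log, and is only an illustration of the recipe, not part of minikube or kubeadm.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Socket path copied from the kubeadm guidance above.
    const endpoint = "/var/run/crio/crio.sock"

    // listKubeContainers mirrors `crictl ps -a | grep kube | grep -v pause`:
    // list every container, keep the Kubernetes ones, drop pause sandboxes.
    func listKubeContainers() ([]string, error) {
    	out, err := exec.Command("crictl", "--runtime-endpoint", endpoint, "ps", "-a").CombinedOutput()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps -a: %v\n%s", err, out)
    	}
    	var rows []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
    			rows = append(rows, line)
    		}
    	}
    	return rows, nil
    }

    // containerLogs fetches the logs of one container, as suggested for the
    // failing component once it has been identified in the listing.
    func containerLogs(id string) (string, error) {
    	out, err := exec.Command("crictl", "--runtime-endpoint", endpoint, "logs", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	rows, err := listKubeContainers()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, r := range rows {
    		fmt.Println(r)
    	}
    	_ = containerLogs // call with a container ID copied from the listing above
    }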
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
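Editorial note: the cleanup sequence above greps each existing /etc/kubernetes/*.conf for the expected control-plane endpoint and removes the file when the grep fails (here with status 2 because the files are missing). A compact sketch of that check-and-remove loop, assuming local file access rather than minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // Endpoint and file list are taken from the log lines above.
    const endpoint = "https://control-plane.minikube.internal:8443"

    func cleanStaleConfigs() {
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := filepath.Join("/etc/kubernetes", f)
    		// grep exits non-zero when the endpoint is absent or the file is missing;
    		// either way the config is treated as stale and removed.
    		if err := exec.Command("grep", "-q", endpoint, path).Run(); err != nil {
    			fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
    			_ = os.Remove(path) // ignore "file does not exist"
    		}
    	}
    }

    func main() { cleanStaleConfigs() }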
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
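Editorial note: the repeated [kubelet-check] lines above amount to an HTTP GET against the kubelet health endpoint on localhost:10248, retried until it answers or the wait-control-plane deadline passes. The sketch below reproduces that probe; the URL comes from the log, but the retry interval and timeout are assumptions and this is not kubeadm's exact backoff.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeKubeletHealthz polls http://localhost:10248/healthz until it
    // returns 200 OK or the timeout expires.
    func probeKubeletHealthz(timeout time.Duration) error {
    	const url = "http://localhost:10248/healthz"
    	deadline := time.Now().Add(timeout)
    	client := &http.Client{Timeout: 3 * time.Second}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("kubelet healthz: %s\n", body)
    				return nil
    			}
    			err = fmt.Errorf("unexpected status %s", resp.Status)
    		}
    		if time.Now().After(deadline) {
    			// This is the point where the log above reports
    			// "connection refused" and gives up.
    			return fmt.Errorf("kubelet never became healthy: %v", err)
    		}
    		time.Sleep(5 * time.Second)
    	}
    }

    func main() {
    	if err := probeKubeletHealthz(40 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }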
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
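Editorial note: the block above enumerates the expected components one by one with `crictl ps -a --quiet --name=<component>` and records that no container was found for any of them. A small standalone sketch of that enumeration, assuming crictl is on PATH and run with sufficient privileges (the log prefixes the command with sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Component names mirror the ones queried in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
    	for _, name := range components {
    		// --quiet prints one container ID per line, or nothing at all
    		// when no container matches the name filter.
    		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%-24s error: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("%-24s no container found\n", name)
    			continue
    		}
    		fmt.Printf("%-24s %d container(s): %s\n", name, len(ids), strings.Join(ids, ", "))
    	}
    }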
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
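Editorial note: the "Gathering logs for ..." steps above collect the kubelet journal, the CRI-O journal, a container listing, and a filtered dmesg. The sketch below runs the same commands locally via `sh -c` so the pipes and fallbacks survive; the command strings are copied from the log, everything else (running locally, without sudo) is an assumption for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Diagnostic commands copied from the log lines above.
    var diagnostics = map[string]string{
    	"kubelet":    "journalctl -u kubelet -n 400",
    	"CRI-O":      "journalctl -u crio -n 400",
    	"containers": "crictl ps -a || docker ps -a",
    	"dmesg":      "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    }

    func main() {
    	// Map iteration order is unspecified; fine for a diagnostics dump.
    	for name, cmd := range diagnostics {
    		fmt.Printf("==> %s <==\n", name)
    		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("(%s failed: %v)\n", cmd, err)
    		}
    		fmt.Println(string(out))
    	}
    }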
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
	
	
	==> CRI-O <==
	Mar 08 04:28:12 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:12.980610743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872092980592581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f54fb4ca-ad29-4826-9977-44cd574a0eac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:12 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:12.981383492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5653584d-e5d6-47c7-b8a2-17298edf84e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:12 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:12.981462553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5653584d-e5d6-47c7-b8a2-17298edf84e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:12 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:12.981671827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5653584d-e5d6-47c7-b8a2-17298edf84e4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.020966003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b705e39-8851-4c89-89be-eeecf1695293 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.021065866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b705e39-8851-4c89-89be-eeecf1695293 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.022315269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85d8849a-9f05-4b31-915c-fe390b17d7c8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.022724303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872093022700974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85d8849a-9f05-4b31-915c-fe390b17d7c8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.023365010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fe3e22d-e02d-4ad6-a4c2-8386ace83e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.023420212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fe3e22d-e02d-4ad6-a4c2-8386ace83e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.023609301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fe3e22d-e02d-4ad6-a4c2-8386ace83e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.066731475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f473427-92bf-41d8-9824-ccfcea301db0 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.066909223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f473427-92bf-41d8-9824-ccfcea301db0 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.068517212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcf2bcfc-da7a-41df-8a9c-2b3332bf556c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.069072144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872093069048303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcf2bcfc-da7a-41df-8a9c-2b3332bf556c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.069674323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a370e12e-31e0-4178-b0ea-3a32be640fd5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.069730164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a370e12e-31e0-4178-b0ea-3a32be640fd5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.070032837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a370e12e-31e0-4178-b0ea-3a32be640fd5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.103580175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=544102f2-6fda-40a0-9c38-177697090110 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.103673503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=544102f2-6fda-40a0-9c38-177697090110 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.105274888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b502044-f929-416f-b954-e925c428007b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.105703118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872093105674481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b502044-f929-416f-b954-e925c428007b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.106603224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d395ba25-c1be-4e6c-972c-4a4a74fcd909 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.106679355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d395ba25-c1be-4e6c-972c-4a4a74fcd909 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:13 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:28:13.106939324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d395ba25-c1be-4e6c-972c-4a4a74fcd909 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c30a2f4827901       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   4a016392435c3       storage-provisioner
	55c405c5db907       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7e040a2a27101       busybox
	8ce12798e302b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   ed8798074e17f       coredns-5dd5756b68-xqqds
	f153fe3d844da       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   e452c97803865       kube-proxy-qpxcp
	0db38a5fe1838       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   4a016392435c3       storage-provisioner
	811f83f4d25b2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   8285ae76ca75f       etcd-default-k8s-diff-port-968261
	c935f4cc994f0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   c930f5da151e5       kube-scheduler-default-k8s-diff-port-968261
	0f0b6de5c1ff3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   3855b999baad2       kube-controller-manager-default-k8s-diff-port-968261
	bd3188fde807f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   585df127d2340       kube-apiserver-default-k8s-diff-port-968261
	
	
	==> coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42078 - 3545 "HINFO IN 1257396824100369806.8679284982953496510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012063077s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-968261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-968261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=default-k8s-diff-port-968261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_07_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-968261
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:28:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:25:28 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:25:28 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:25:28 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:25:28 +0000   Fri, 08 Mar 2024 04:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.32
	  Hostname:    default-k8s-diff-port-968261
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 04a728da5b74434b8ff9a35ed8832efa
	  System UUID:                04a728da-5b74-434b-8ff9-a35ed8832efa
	  Boot ID:                    5fb53ae5-a4d4-41f2-af99-b9423669fb04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-xqqds                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-968261                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-968261              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-968261     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-qpxcp                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-968261              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-ljb42                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeReady
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-968261 event: Registered Node default-k8s-diff-port-968261 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-968261 event: Registered Node default-k8s-diff-port-968261 in Controller
	
	
	==> dmesg <==
	[Mar 8 04:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052790] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664603] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.441245] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.736359] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.721100] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.060226] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077478] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.226596] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.134423] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.300437] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.885782] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.072919] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.091429] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.598463] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.570151] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +3.178214] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.265605] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] <==
	{"level":"info","ts":"2024-03-08T04:14:41.257192Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.32:2380"}
	{"level":"info","ts":"2024-03-08T04:14:41.257131Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:14:41.262057Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4611435f95b8c9ae","initial-advertise-peer-urls":["https://192.168.61.32:2380"],"listen-peer-urls":["https://192.168.61.32:2380"],"advertise-client-urls":["https://192.168.61.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:14:41.262832Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.32:2380"}
	{"level":"info","ts":"2024-03-08T04:14:41.262971Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:14:41.263509Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1806ce46318d79e6","local-member-id":"4611435f95b8c9ae","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:14:41.264469Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:14:43.056041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T04:14:43.05615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:14:43.056202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae received MsgPreVoteResp from 4611435f95b8c9ae at term 2"}
	{"level":"info","ts":"2024-03-08T04:14:43.056244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.056268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae received MsgVoteResp from 4611435f95b8c9ae at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.056295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae became leader at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.05632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4611435f95b8c9ae elected leader 4611435f95b8c9ae at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.063125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4611435f95b8c9ae","local-member-attributes":"{Name:default-k8s-diff-port-968261 ClientURLs:[https://192.168.61.32:2379]}","request-path":"/0/members/4611435f95b8c9ae/attributes","cluster-id":"1806ce46318d79e6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:14:43.063198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:14:43.063415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:14:43.063456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:14:43.063474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:14:43.064581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.32:2379"}
	{"level":"info","ts":"2024-03-08T04:14:43.064587Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:15:33.846127Z","caller":"traceutil/trace.go:171","msg":"trace[1060262001] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"178.072709ms","start":"2024-03-08T04:15:33.668011Z","end":"2024-03-08T04:15:33.846083Z","steps":["trace[1060262001] 'process raft request'  (duration: 101.909663ms)","trace[1060262001] 'compare'  (duration: 76.071746ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T04:24:43.090081Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":795}
	{"level":"info","ts":"2024-03-08T04:24:43.094167Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":795,"took":"3.211562ms","hash":1511359907}
	{"level":"info","ts":"2024-03-08T04:24:43.094308Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1511359907,"revision":795,"compact-revision":-1}
	
	
	==> kernel <==
	 04:28:13 up 14 min,  0 users,  load average: 0.34, 0.29, 0.16
	Linux default-k8s-diff-port-968261 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] <==
	I0308 04:24:44.490561       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:24:45.491258       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:24:45.491285       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:24:45.491292       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:24:45.491368       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:24:45.491433       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:24:45.492609       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:25:44.424483       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:25:45.491736       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:45.491945       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:25:45.492019       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:25:45.492891       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:45.492984       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:25:45.493053       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:26:44.425127       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 04:27:44.425127       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:27:45.492236       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:27:45.492387       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:27:45.492420       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:27:45.493553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:27:45.493625       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:27:45.493633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] <==
	I0308 04:22:27.706280       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:22:57.244119       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:22:57.715286       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:23:27.250732       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:23:27.727939       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:23:57.258691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:23:57.737926       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:24:27.265163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:24:27.746722       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:24:57.271088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:24:57.758901       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:25:27.286294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:27.767648       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:25:52.888669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="315.885µs"
	E0308 04:25:57.291513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:57.779226       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:26:03.888652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="204.673µs"
	E0308 04:26:27.297324       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:27.787952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:26:57.303743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:57.796096       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:27:27.312262       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:27.804112       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:27:57.318398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:57.812969       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] <==
	I0308 04:14:45.422184       1 server_others.go:69] "Using iptables proxy"
	I0308 04:14:45.444391       1 node.go:141] Successfully retrieved node IP: 192.168.61.32
	I0308 04:14:45.545119       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:14:45.545170       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:14:45.548035       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:14:45.548097       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:14:45.548230       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:14:45.548263       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:14:45.549447       1 config.go:188] "Starting service config controller"
	I0308 04:14:45.549494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:14:45.549514       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:14:45.549517       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:14:45.549956       1 config.go:315] "Starting node config controller"
	I0308 04:14:45.549992       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:14:45.650494       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:14:45.650546       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:14:45.650568       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] <==
	I0308 04:14:42.049093       1 serving.go:348] Generated self-signed cert in-memory
	I0308 04:14:44.551723       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:14:44.551934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:14:44.559972       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:14:44.566085       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0308 04:14:44.566121       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0308 04:14:44.566147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:14:44.572574       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0308 04:14:44.577905       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0308 04:14:44.573392       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:14:44.577936       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:14:44.666950       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0308 04:14:44.678437       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:14:44.678535       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:25:39 default-k8s-diff-port-968261 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:25:39 default-k8s-diff-port-968261 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:25:39 default-k8s-diff-port-968261 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:25:39 default-k8s-diff-port-968261 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:25:52 default-k8s-diff-port-968261 kubelet[910]: E0308 04:25:52.872051     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:26:03 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:03.873001     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:26:17 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:17.872090     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:26:30 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:30.872956     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:26:39 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:39.892221     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:26:39 default-k8s-diff-port-968261 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:26:39 default-k8s-diff-port-968261 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:26:39 default-k8s-diff-port-968261 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:26:39 default-k8s-diff-port-968261 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:26:43 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:43.872517     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:26:58 default-k8s-diff-port-968261 kubelet[910]: E0308 04:26:58.871930     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:27:12 default-k8s-diff-port-968261 kubelet[910]: E0308 04:27:12.872054     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:27:27 default-k8s-diff-port-968261 kubelet[910]: E0308 04:27:27.872879     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:27:39 default-k8s-diff-port-968261 kubelet[910]: E0308 04:27:39.891974     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:27:39 default-k8s-diff-port-968261 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:27:39 default-k8s-diff-port-968261 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:27:39 default-k8s-diff-port-968261 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:27:39 default-k8s-diff-port-968261 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:27:41 default-k8s-diff-port-968261 kubelet[910]: E0308 04:27:41.876330     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:27:53 default-k8s-diff-port-968261 kubelet[910]: E0308 04:27:53.872433     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:28:06 default-k8s-diff-port-968261 kubelet[910]: E0308 04:28:06.872510     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	
	
	==> storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] <==
	I0308 04:14:45.354479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0308 04:15:15.360536       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] <==
	I0308 04:15:16.236670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:15:16.249201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:15:16.249284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:15:33.659038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:15:33.659228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25!
	I0308 04:15:33.659331       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"099f6927-da18-43cc-af2d-4f1a3cfff472", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25 became leader
	I0308 04:15:33.759915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25!
	

                                                
                                                
-- /stdout --
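Note on the recurring metrics-server ImagePullBackOff entries in the kubelet log above: this suite deliberately enables the addon against an unreachable registry (the same enable command appears in the audit table of the next post-mortem), along the lines of:

	out/minikube-linux-amd64 -p default-k8s-diff-port-968261 addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain

Since fake.domain never resolves, the pull can never succeed, which is why metrics-server-57f55c9bc5-ljb42 shows up as the only non-running pod in the field-selector check below.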
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ljb42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42: exit status 1 (68.991296ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ljb42" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.21s)
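For reference, a roughly equivalent manual version of the check that timed out here (a sketch only; it assumes the default-k8s-diff-port variant waits on the same k8s-app=kubernetes-dashboard pods as the embed-certs run below, and that the default-k8s-diff-port-968261 context is still reachable) would be:

	kubectl --context default-k8s-diff-port-968261 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-968261 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

If the first command returns no pods at all, the dashboard workload was never recreated after the stop/start, rather than created but unhealthy.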

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-416634 -n embed-certs-416634
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:28:34.998527252 +0000 UTC m=+5568.037436294
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
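Before walking through the post-mortem below, a quick way to tell whether the dashboard workload was ever created inside the 9m0s window (a sketch, assuming the embed-certs-416634 context from this run and the same minikube binary used elsewhere in the report) is:

	kubectl --context embed-certs-416634 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard
	out/minikube-linux-amd64 -p embed-certs-416634 addons list

An empty result from the first command, with the dashboard addon reported as enabled by the second, narrows the failure to the addon deployment never being applied rather than to unhealthy pods.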
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-416634 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-416634 logs -n 25: (2.09583652s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
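The shell fragment above, captured verbatim from the provisioning log, is the idempotent /etc/hosts update minikube issues after setting the hostname: if no entry already ends in the new hostname, it either rewrites an existing 127.0.1.1 line or appends one. As an illustrative sketch only (the hostname is taken from this log; the excerpt below is not additional captured output), the expected end state is a single loopback alias:

	# /etc/hosts (illustrative end state, not from the run)
	127.0.1.1 embed-certs-416634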
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
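The YAML above is the kubeadm configuration minikube generated for this profile; a few lines further down the log shows it being copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hedged sketch that is not part of the captured run, one way to sanity-check such a generated file by hand is a dry-run init against the same path:

	# illustrative only; not executed in this log
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run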
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
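Each certificate installed under /usr/share/ca-certificates is made visible to OpenSSL's trust lookup by symlinking it as <subject-hash>.0 in /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout` exactly as run above. A hedged Go sketch of that step (it shells out to openssl the same way; the file list is copied from the log):

// Sketch: compute the OpenSSL subject hash for a PEM certificate and link it
// into /etc/ssl/certs/<hash>.0 so the system trust lookup can find it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Remove a stale link first so os.Symlink does not fail with EEXIST.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/918988.pem",
		"/usr/share/ca-certificates/9189882.pem",
	} {
		if err := linkByHash(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}
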
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
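The `-checkend 86400` runs above verify that each control-plane certificate stays valid for at least another 24 hours. The same check can be done without shelling out by parsing the PEM and comparing NotAfter; a small sketch for illustration (not minikube's implementation, path taken from the log):

// Sketch: report whether a PEM certificate expires within the next 24 hours,
// the equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
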
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
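Because this is a restart of an existing cluster, the log runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) one at a time with the version-pinned binaries on PATH instead of a full `kubeadm init`. A rough local sketch of that loop (command strings copied from the log; the SSH plumbing minikube uses is omitted):

// Sketch: run the kubeadm init phases used for a cluster restart, with the
// pinned binary directory prepended to PATH as in the logged commands.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		c := exec.Command("/bin/bash", "-c", cmd)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
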
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
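The healthz probe above first returns 403 (anonymous requests are rejected until the RBAC bootstrap roles exist), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal polling loop that reproduces this wait (endpoint taken from the log; TLS verification is skipped because the apiserver cert is signed by minikube's private CA; interval and timeout are illustrative):

// Sketch: poll the apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The serving cert is not in the system trust store, so verification
		// is skipped for this simple probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.137:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
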
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
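The pod_ready helper keeps polling each system-critical pod until its Ready condition turns True, which is why the coredns pod above still reports "Ready":"False". A sketch of an equivalent check with client-go (namespace and pod name copied from the log; kubeconfig path and timings are assumptions):

// Sketch: wait until a pod's Ready condition is True, polling every 2s.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-mqz25", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return podReady(pod), nil
		})
	fmt.Println("wait finished, err =", err)
}
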
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
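WaitForSSH above simply retries an `exit 0` over SSH with the machine's private key until the command succeeds. A comparable probe with golang.org/x/crypto/ssh is sketched below (host, port, user, and key path mirror the log; host key checking is disabled just as the logged ssh flags do; not the libmachine implementation):

// Sketch: check that an SSH server accepts the machine's key by running a
// trivial command, roughly what the "exit 0" probe in the log does.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshReady("192.168.61.32:22", "docker",
		"/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa")
	fmt.Println("ssh ready, err =", err)
}
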
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
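configureAuth regenerates the docker-machine server certificate with the SAN list shown above (127.0.0.1, the VM IP, the machine name, localhost, minikube), signed by the local CA. A rough crypto/x509 sketch of issuing such a certificate, under the assumption that the CA key is an RSA key in PKCS#1 PEM form; file names and validity period are placeholders, not the provisioner's actual code:

// Sketch: issue a server certificate carrying the SANs from the log, signed
// by an existing CA (assumed to be an RSA key in PKCS#1 PEM form).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic(path + ": no PEM block")
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-968261"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-968261", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.32")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}
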
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
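	Note: the fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host clock, and accept the ~91ms skew as within tolerance. A minimal sketch of the same check, assuming a 2-second tolerance and a hypothetical resync step (neither threshold nor correction is shown in this log):
	guest=$(ssh docker@192.168.61.32 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$host - $guest" | bc | tr -d '-')      # absolute skew, in seconds
	if (( $(echo "$delta > 2" | bc -l) )); then
	  ssh docker@192.168.61.32 "sudo date -s @$host"     # hypothetical resync; not performed in this run
	fi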
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
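	Note: the sequence above rewrites /etc/crictl.yaml, pins the pause image, switches CRI-O to the cgroupfs cgroup manager, loads br_netfilter (after the sysctl probe failed), enables IPv4 forwarding, and restarts the service. Condensed into one script, a sketch of the same steps with paths and values taken from the log:
	sudo tee /etc/crictl.yaml <<'EOF'
	runtime-endpoint: unix:///var/run/crio/crio.sock
	EOF
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter                           # the sysctl check above could not find the key
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio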
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
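	Note: the retry.go lines for old-k8s-version-496808 poll libvirt for a DHCP lease with growing, jittered delays until the VM reports an IP. A rough sketch of that wait loop, assuming `virsh net-dhcp-leases` as the lookup (the log shows only the retries, not the exact query):
	mac="52:54:00:0b:c9:35"; net="mk-old-k8s-version-496808"; delay=0.3
	for i in $(seq 1 20); do
	  ip=$(virsh net-dhcp-leases "$net" | awk -v m="$mac" '$3==m {print $5}' | cut -d/ -f1)
	  [ -n "$ip" ] && { echo "machine is up at $ip"; break; }
	  sleep "$delay"
	  delay=$(echo "$delay * 1.5" | bc)                  # roughly mirrors the growing intervals above
	done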
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
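	Note: pod_ready.go above blocks for up to 4m0s per kube-system pod until its PodReady condition turns True. The same gate expressed with kubectl, as a sketch (assuming the context name matches the embed-certs-416634 profile):
	kubectl --context embed-certs-416634 -n kube-system wait \
	  --for=condition=Ready pod/kube-scheduler-embed-certs-416634 --timeout=4m
	# A pod that never becomes Ready (e.g. metrics-server-57f55c9bc5-qnq74 below) makes this time out.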
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
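	Note: the preload path above runs `crictl images`, finds the expected v1.28.4 control-plane images missing, copies in the ~458 MB preload tarball, extracts it into /var with lz4, removes it, and re-checks. A condensed sketch of that flow, with file names taken from the log:
	if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.4'; then
	  scp preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 docker@192.168.61.32:/preloaded.tar.lz4
	  ssh docker@192.168.61.32 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
	fi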
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
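	Note: the openssl calls above do two things: link each CA bundle into /etc/ssl/certs under its subject-hash name so the system trust store picks it up, and verify that every control-plane certificate is still valid for at least another 24 hours (`-checkend 86400`). A sketch of both steps for a single file, using cert.pem as a placeholder name:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${hash}.0"
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" || echo "expires within 24h"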
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
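	Note: because existing configuration files were found (restartPrimaryControlPlane), minikube regenerates only what it needs instead of running a full `kubeadm init`: certs, kubeconfigs, kubelet start, the control-plane static pod manifests, and local etcd. The same sequence by hand, with the binary path and config file taken from the log:
	K=/var/lib/minikube/binaries/v1.28.4
	sudo env PATH="$K:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml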
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
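	Note: api_server.go polls the apiserver's /healthz endpoint until it returns 200. The first attempt fails with connection refused because the static pod has not started yet; the next responses (below) return 500 with a per-check breakdown while post-start hooks are still completing. A sketch of the same probe, skipping TLS verification since the serving CA is not yet trusted locally:
	until curl -sk --max-time 2 https://192.168.61.32:8444/healthz | grep -qx ok; do
	  sleep 0.5
	done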
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
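The two SSH commands above are how the provisioner makes the guest's transient hostname and /etc/hosts agree with the profile name. A minimal Go sketch of composing those commands (not minikube's actual provisioner; `runSSH` is a stand-in for the real SSH client):

```go
// Sketch only: build the hostname/etc-hosts shell snippets used above.
package main

import "fmt"

func hostnameCmds(name string) []string {
	return []string{
		// set the hostname and persist it
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
		// make /etc/hosts agree: rewrite an existing 127.0.1.1 entry or append one
		fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name),
	}
}

func runSSH(cmd string) { fmt.Println("would run over SSH:", cmd) }

func main() {
	for _, cmd := range hostnameCmds("old-k8s-version-496808") {
		runSSH(cmd)
	}
}
```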
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
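provision.go then issues a server certificate whose SANs cover 127.0.0.1, the VM IP, and the hostnames listed above, signed by the existing minikube CA. A rough standard-library sketch, assuming a PKCS#1 CA key in local files `ca.pem`/`ca-key.pem` (assumed paths, not the minikube cert store; errors are funneled through a `must` helper):

```go
// Sketch only: issue a server cert with the SANs shown in the log, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))          // assumed path
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))     // assumed PKCS#1 key
	ca := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	priv := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-496808"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-496808"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, ca, &priv.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```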
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
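fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the machine when the skew stays inside a tolerance; here the delta is ~93ms. A small sketch of that comparison using the two timestamps from the log and an assumed one-second tolerance (not necessarily minikube's value):

```go
// Sketch only: parse the guest clock and compare it to the host clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
// float64 loses sub-microsecond precision, which is fine for a skew check.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	return time.Unix(sec, int64((secs-float64(sec))*1e9)), nil
}

func main() {
	const tolerance = time.Second // assumed threshold
	guest, err := parseGuestClock("1709871294.910163930") // value from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 3, 8, 4, 14, 54, 816936754, time.UTC) // host-side timestamp
	delta := remote.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; outside tolerance\n", delta)
	}
}
```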
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
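The sed invocations above rewrite `/etc/crio/crio.conf.d/02-crio.conf` so cri-o uses the v1.20.0 pause image and the cgroupfs cgroup manager before the daemon is restarted. A sketch of the same rewrites as in-memory regexp replacements (the sample config content below is assumed, not read from the VM):

```go
// Sketch only: the pause_image / cgroup_manager / conmon_cgroup edits as regexp rewrites.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// point the pause image at the version kubeadm v1.20.0 expects
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// force the cgroupfs cgroup manager
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
```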
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
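pod_ready.go keeps polling each system pod's Ready condition until it flips to True or the per-pod deadline (6m0s above) expires, which is what produces the repeated `has status "Ready":"False"` lines. A stand-alone sketch of such a poll via kubectl; the context, namespace, and pod name are taken from the log, while the 2-second poll interval is an assumption:

```go
// Sketch only: poll a pod's Ready condition with kubectl until True or deadline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		ready, err := podReady("default-k8s-diff-port-968261", "kube-system",
			"kube-apiserver-default-k8s-diff-port-968261")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```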
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
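While old-k8s-version is being provisioned, the no-preload VM is restarted and the driver polls libvirt for a DHCP lease, retrying with progressively longer (jittered) delays: 250ms, 265ms, 357ms, ... above. A sketch of that wait loop; `lookupIP` is a stand-in for the real lease query:

```go
// Sketch only: wait for a VM's IP with growing, jittered retry delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the VM needs a few attempts to get a lease
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.50.45", nil // made-up address for the sketch
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Printf("machine is up at %s after %d attempts\n", ip, attempt)
			return
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2)) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow the delay between attempts
	}
}
```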
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
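Because the freshly restarted VM has no `/preloaded.tar.lz4`, the preload tarball (~473 MB) is copied over, unpacked into /var with the tar flags shown above, and then removed. A sketch of that sequence driven through exec.Command; the scp step is elided and `stat`, `sudo`, and `tar` with lz4 support are assumed to be available on the target:

```go
// Sketch only: check for the preload tarball, extract it into /var, remove it.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	const tarball = "/preloaded.tar.lz4" // path used in the log above
	if err := run("stat", "-c", "%s %y", tarball); err != nil {
		fmt.Println("tarball missing, would scp it from the local cache:", err)
		// scp of ~473 MB from the host cache elided in this sketch
	}
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = run("sudo", "rm", "-f", tarball)
}
```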
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
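LoadCachedImages asks the runtime which images it already has (via `sudo podman image inspect` / `crictl images`) and marks everything else as needing transfer; here none of the v1.20.0 images exist in cri-o and the local cache files are also missing, hence the warning above. A sketch of the needs-transfer decision over the image list from the log:

```go
// Sketch only: decide which required images still need to be transferred.
package main

import "fmt"

// needsTransfer returns every required image tag not already present in the runtime.
func needsTransfer(required []string, present map[string]bool) []string {
	var missing []string
	for _, img := range required {
		if !present[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-controller-manager:v1.20.0",
		"registry.k8s.io/kube-scheduler:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	present := map[string]bool{} // empty: nothing preloaded, as in the log
	for _, img := range needsTransfer(required, present) {
		fmt.Printf("%q needs transfer\n", img)
	}
}
```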
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
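The kubeadm/kubelet/kube-proxy config above is produced by filling node-specific values (name, IP, API server port, Kubernetes version) into a template before it is written to `/var/tmp/minikube/kubeadm.yaml.new`. A trimmed-down sketch using text/template; the template body here is an assumed simplification, not minikube's real one:

```go
// Sketch only: render node-specific fields into a minimal kubeadm config.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
`

type params struct {
	NodeName          string
	NodeIP            string
	APIServerPort     int
	KubernetesVersion string
}

func main() {
	p := params{
		NodeName:          "old-k8s-version-496808",
		NodeIP:            "192.168.39.3",
		APIServerPort:     8443,
		KubernetesVersion: "v1.20.0",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```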
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
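	The openssl x509 -checkend 86400 invocations above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A minimal Go sketch of an equivalent check with crypto/x509 (hypothetical helper, not minikube's implementation):

// Hypothetical sketch of the 24-hour validity check that the logged
// "openssl x509 -checkend 86400" commands perform.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path stays valid for at least d,
// mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Passes only if expiry is more than d in the future.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}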
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
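	The retry.go lines above (will retry after 2.0669035s, 1.864987253s, 2.982761957s) show the libmachine wait loop backing off with randomized, roughly growing delays while the no-preload VM acquires an IP address. A rough Go sketch of that polling pattern, assuming a simple jittered exponential backoff (illustrative only, not the actual minikube retry package):

// Hypothetical sketch of the retry pattern visible in the "will retry after ..."
// lines above: poll a condition, sleeping a jittered and growing delay between
// attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Randomize the wait so concurrent waiters do not poll in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("condition not met after retries")
}

func main() {
	calls := 0
	_ = retryUntil(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}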
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
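	fix.go above reads the guest clock over SSH with date +%s.%N and accepts the skew because the ~83ms delta is within tolerance. A small Go sketch of that comparison, reusing the two readings from the log (hypothetical helper names, not minikube's code; assumes %N prints nine digits of nanoseconds):

// Hypothetical sketch of the guest-vs-host clock comparison logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Readings taken from the log lines above.
	guest, _ := parseGuestClock("1709871315.587103178")
	host := time.Date(2024, 3, 8, 4, 15, 15, 503747265, time.UTC)
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %s within 1s tolerance: %v\n", delta, delta < time.Second)
}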
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
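The pod_ready.go lines above are a poll on the metrics-server pod's Ready condition until it flips to True or the wait times out. A hedged sketch of that check using the Kubernetes core/v1 types follows; it assumes k8s.io/api is available in go.mod and the pod literal is made up for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// status the poll above keeps re-checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Made-up object standing in for a metrics-server pod fetched from the API.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Printf("pod has status \"Ready\":%v\n", isPodReady(pod))
}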
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
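Before each podman load above, the loader stats the tarball already on the node and skips the copy when it is present ("copy: skipping ... (exists)"). A rough, simplified sketch of that decision follows; the paths and helper names are illustrative, and in minikube the stat runs through the SSH runner rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// remoteStat runs `stat -c "%s %y" path`, the same probe seen in the log; it runs
// locally here only to keep the sketch self-contained.
func remoteStat(path string) (string, error) {
	out, err := exec.Command("stat", "-c", "%s %y", path).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// needsCopy is a simplified version of the skip decision: transfer the cached
// tarball only when the remote copy is missing or differs in size. (The real
// check also looks at modification times.)
func needsCopy(localPath, remotePath string) bool {
	local, err := os.Stat(localPath)
	if err != nil {
		return true // no local info; copy to be safe
	}
	remote, err := remoteStat(remotePath)
	if err != nil {
		return true // remote tarball missing: must transfer and load it
	}
	return !strings.HasPrefix(remote, fmt.Sprintf("%d ", local.Size()))
}

func main() {
	img := "/var/lib/minikube/images/etcd_3.5.10-0"
	cache := os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0")
	if needsCopy(cache, img) {
		fmt.Println("copying", cache, "->", img)
	} else {
		fmt.Println("copy: skipping", img, "(exists)")
	}
}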
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
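The kubeadm.yaml.new written above is generated from the kubeadm options logged at kubeadm.go:181. A toy text/template sketch of that kind of rendering follows; the struct fields and template are illustrative and cover only a subset of the real config, not minikube's actual generator.

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the options that feed the generated kubeadm config.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	CRISocket         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.72.214",
		APIServerPort:     8443,
		NodeName:          "no-preload-477676",
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		CRISocket:         "/var/run/crio/crio.sock",
	}
	// Render to stdout; minikube scp's the rendered result to /var/tmp/minikube/kubeadm.yaml.new.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}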
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
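The openssl -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another day before reuse. An equivalent in-process check in Go, using crypto/x509 instead of shelling out, is sketched below; the certificate path is taken from the log but the helper itself is only illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least d, mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for another 24h:", ok)
}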
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
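The restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than running a full init. A rough sketch of issuing that phase sequence with os/exec follows; it simplifies the `sudo env PATH=...` wrapper seen in the log and is not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as the restartPrimaryControlPlane log lines above.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfg)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", phase, err, out)
	}
}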
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
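At this point process 959419 has exhausted its 4m0s extra wait for metrics-server-57f55c9bc5-qnq74, logged that it cannot restart the control-plane node, and fallen back to kubeadm reset. The sketch below illustrates that deadline-bounded readiness poll in isolation, assuming a placeholder checkReady function in place of the real pod-status lookup; the roughly 2-second interval and the 4-minute budget are taken from the cadence visible in the pod_ready.go lines, and the sketch is not minikube's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls checkReady until it reports true or the timeout elapses.
// checkReady is a hypothetical stand-in for the real pod-status lookup.
func waitPodReady(checkReady func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := checkReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly the polling cadence seen in the log
	}
	return errors.New("timed out waiting for pod to be \"Ready\"")
}

func main() {
	// The harness uses a 4-minute budget; a short one here keeps the demo quick.
	err := waitPodReady(func() (bool, error) { return false, nil }, 4*time.Second)
	fmt.Println(err) // after the timeout, the caller above falls back to kubeadm reset
}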
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
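(The per-container pattern repeated above — resolve an ID by name with crictl ps, then tail that container's logs — can be wrapped in a small helper; a sketch only, assuming crictl lives at /usr/bin/crictl as in the log, with "name" standing in for any of the component names probed above.)

    # resolve the newest container ID for a given name and tail its logs
    name=kube-apiserver
    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
    if [ -n "$id" ]; then
      sudo /usr/bin/crictl logs --tail 400 "$id"
    else
      echo "No container was found matching \"$name\""
    fi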
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
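(The healthz probe above can be repeated manually against the same endpoint; a sketch, assuming the default RBAC binding that allows unauthenticated reads of /healthz, with -k used only because the apiserver presents a cluster-internal CA.)

    # a healthy apiserver answers HTTP 200 with the literal body "ok"
    curl -ks https://192.168.61.32:8444/healthz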
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
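(The stale-config cleanup above follows one pattern per file: grep the expected control-plane URL and remove the file when it is missing or does not match. A minimal sketch of the same loop, assuming the paths and URL shown in the log.)

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # treat missing or mismatched configs as stale
      fi
    done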
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
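	(Editor's note on the polling above: the repeated "kubectl get sa default" runs are minikube waiting until the default service account exists before the cluster-admin RBAC binding can be considered settled. The following is only an illustrative sketch of that polling pattern, assuming the same binary path and kubeconfig seen in the log; it is not the code behind kubeadm.go:1106.)

	// Hedged sketch: re-run "kubectl get sa default" every 500ms until the
	// default service account exists, or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				return nil // service account exists; later steps can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}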
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
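	(Editor's note on the pod_ready waits above: each system-critical pod is polled until its Ready condition is True. The sketch below is illustrative only, written with client-go rather than minikube's own pod_ready helper; the pod name and kubeconfig path are taken from this log, everything else is assumed.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, which is the
	// status the log lines above poll for.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18333-911675/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-416634", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(podReady(pod))
	}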
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
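	(Editor's note on the healthz wait above: api_server.go probes https://192.168.50.137:8443/healthz until it answers 200 with the body "ok". Below is a minimal, assumed-equivalent probe in Go; the URL and expected body come from this log, the helper name and TLS handling are illustrative, not minikube's implementation.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a cluster-local certificate here, so the
			// probe skips verification (probe only; not for general clients).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		// A healthy control plane answers 200 with the literal body "ok",
		// matching the "returned 200: ok" lines in this log.
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.50.137:8443/healthz")
		fmt.Println(ok, err)
	}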
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
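The join command printed above carries a bootstrap token and the discovery CA cert hash. If either ever needs to be re-checked or regenerated, a rough equivalent from the host is the following sketch (the kubeadm path is the one used throughout this log; running it through minikube ssh assumes the no-preload-477676 VM is still up):

    # List bootstrap tokens known to this control plane
    minikube ssh -p no-preload-477676 -- sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm token list
    # Print a fresh, complete join command (token + CA cert hash)
    minikube ssh -p no-preload-477676 -- sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm token create --print-join-command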
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
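The 457-byte bridge conflist copied to /etc/cni/net.d/1-k8s.conflist is not echoed into this log. If its contents matter for debugging, they can be read back from the node with a one-liner such as this sketch (assumes the profile is still running):

    # Dump the generated bridge CNI config from inside the VM
    minikube ssh -p no-preload-477676 -- sudo cat /etc/cni/net.d/1-k8s.conflist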
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
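The burst of repeated "kubectl get sa default" runs above is minikube polling until kube-controller-manager has created the default ServiceAccount, which is what the elevateKubeSystemPrivileges duration measures. A hand-rolled equivalent would look roughly like the sketch below (kubectl context name assumed to match the profile; the one-second poll interval is chosen for illustration, not taken from minikube):

    # Wait until the "default" ServiceAccount exists in the default namespace
    until kubectl --context no-preload-477676 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 1
    done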
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
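With storage-provisioner, default-storageclass and metrics-server reported as enabled, a quick manual check that the addon objects actually landed would look like the sketch below (context name assumed to match the profile; note the pod list a few lines further down still shows metrics-server-57f55c9bc5-756mf Pending at this point):

    # Confirm the metrics-server Deployment and the default StorageClass exist
    kubectl --context no-preload-477676 -n kube-system get deployment metrics-server
    kubectl --context no-preload-477676 get storageclass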
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
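The healthz probe above can be reproduced by hand from the host; -k skips TLS verification against the cluster CA, which would otherwise have to be passed explicitly from the profile directory (sketch):

    # Same endpoint the log checks; returns "ok" on a healthy apiserver
    curl -k https://192.168.72.214:8443/healthz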
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
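At this point the no-preload-477676 cluster is usable from the host, so a minimal sanity check would be (sketch; context name assumed to match the profile):

    kubectl --context no-preload-477676 get nodes -o wide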
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
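The troubleshooting hints in the failure output map directly onto the minikube VM. A sketch of collecting them is below; the affected profile is not named in this stretch of the log, so <profile> is a placeholder to be substituted:

    # Substitute the failing profile for <profile>
    minikube ssh -p <profile> -- sudo systemctl status kubelet
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager
    minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a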
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
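Following the suggestion and the Service-Kubelet warning recorded above, a hedged sketch of the retry one might attempt; the failing profile's name is not shown in this excerpt, so <profile> is a placeholder, and the cgroup-driver override is exactly the flag the log proposes:

	# enable the kubelet unit, per the [WARNING Service-Kubelet] message
	# (inside the node, e.g. via 'minikube ssh')
	sudo systemctl enable kubelet.service
	# review the kubelet's unit logs for the underlying failure
	sudo journalctl -xeu kubelet
	# retry the start with the kubelet cgroup driver pinned to systemd
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd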
	
	
	==> CRI-O <==
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.440634644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872116440611018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=786c42d2-4870-4eaa-8f12-e91eb66a3411 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.441399037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0a1d43d-4679-4131-8b64-c43adce3b5ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.441454683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0a1d43d-4679-4131-8b64-c43adce3b5ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.441651363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0a1d43d-4679-4131-8b64-c43adce3b5ea name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.486825103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3861bd8-0d68-4e3e-a4e1-9ea06595f71a name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.486920427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3861bd8-0d68-4e3e-a4e1-9ea06595f71a name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.488058129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e1b6103-fc2f-44c3-9787-39d70c69685b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.488544095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872116488522766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e1b6103-fc2f-44c3-9787-39d70c69685b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.489079230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50568e35-5585-417d-8f54-364230d91798 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.489162545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50568e35-5585-417d-8f54-364230d91798 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.489421790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50568e35-5585-417d-8f54-364230d91798 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.527566092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=161fe170-9963-4081-be5a-547f20373efd name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.527623827Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=161fe170-9963-4081-be5a-547f20373efd name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.529101417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa2d23a6-1813-45b5-aaea-3a9d080ac64c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.529670222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872116529645444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa2d23a6-1813-45b5-aaea-3a9d080ac64c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.530249781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=111678cb-197f-4415-9a5a-a674e7ef2198 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.530371915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=111678cb-197f-4415-9a5a-a674e7ef2198 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.530553834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=111678cb-197f-4415-9a5a-a674e7ef2198 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.569414480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d1d6051-e160-4dfb-8a62-d01dac0630d0 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.569523109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d1d6051-e160-4dfb-8a62-d01dac0630d0 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.571181019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5910dbe-eca0-477d-a868-9e2e7e61c165 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.571721106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872116571693852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5910dbe-eca0-477d-a868-9e2e7e61c165 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.572729220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcd99117-a114-4d12-a524-c7912412fe47 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.572834121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcd99117-a114-4d12-a524-c7912412fe47 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:28:36 embed-certs-416634 crio[696]: time="2024-03-08 04:28:36.573089332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcd99117-a114-4d12-a524-c7912412fe47 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	069e0e7141e5c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   72cb54c01e4b8       kube-proxy-vc6p9
	22cf1eb102eca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   d5ef238b507a9       coredns-5dd5756b68-t8z94
	58a3351a84ed3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   0ef7e29efb1fc       storage-provisioner
	700108aa484b7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   a49b661206f86       coredns-5dd5756b68-h7p5l
	a19746274b80b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   db04f4bffeb9f       kube-scheduler-embed-certs-416634
	8b723d6ce5e40       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   c5e1758c71ac9       etcd-embed-certs-416634
	914f5c7bd0bf4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   2850b1ddd7fe2       kube-controller-manager-embed-certs-416634
	3796ec2c42925       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   2329d7c360fee       kube-apiserver-embed-certs-416634
	
	
	==> coredns [22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-416634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-416634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=embed-certs-416634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:19:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-416634
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:28:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:24:44 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:24:44 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:24:44 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:24:44 +0000   Fri, 08 Mar 2024 04:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.137
	  Hostname:    embed-certs-416634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d07fdaff76b0452ea252cb050c19ef00
	  System UUID:                d07fdaff-76b0-452e-a252-cb050c19ef00
	  Boot ID:                    d48cc684-c130-4fc6-94f4-ef7b78e4b404
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-h7p5l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-5dd5756b68-t8z94                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-embed-certs-416634                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-416634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-416634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-vc6p9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-embed-certs-416634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-kh9vr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-416634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-416634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-416634 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m6s   node-controller  Node embed-certs-416634 event: Registered Node embed-certs-416634 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054483] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.553813] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar 8 04:14] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.729338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.485164] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.056355] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066121] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.191730] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.137487] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.308272] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +5.649360] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +0.062892] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.974997] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.629923] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.799238] kauditd_printk_skb: 74 callbacks suppressed
	[Mar 8 04:19] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	[  +4.739572] kauditd_printk_skb: 59 callbacks suppressed
	[  +2.567409] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
	[ +12.369009] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[  +0.112620] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 8 04:20] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e] <==
	{"level":"info","ts":"2024-03-08T04:19:12.926964Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:19:12.927161Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"53f1e4b6b2bc3c92","initial-advertise-peer-urls":["https://192.168.50.137:2380"],"listen-peer-urls":["https://192.168.50.137:2380"],"advertise-client-urls":["https://192.168.50.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:19:12.927218Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:19:12.927282Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-08T04:19:12.936071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 switched to configuration voters=(6048867247869148306)"}
	{"level":"info","ts":"2024-03-08T04:19:12.936285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","added-peer-id":"53f1e4b6b2bc3c92","added-peer-peer-urls":["https://192.168.50.137:2380"]}
	{"level":"info","ts":"2024-03-08T04:19:12.938526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-08T04:19:12.978418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgPreVoteResp from 53f1e4b6b2bc3c92 at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.978555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgVoteResp from 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.978563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.97857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 53f1e4b6b2bc3c92 elected leader 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.982655Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:12.987491Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"53f1e4b6b2bc3c92","local-member-attributes":"{Name:embed-certs-416634 ClientURLs:[https://192.168.50.137:2379]}","request-path":"/0/members/53f1e4b6b2bc3c92/attributes","cluster-id":"7ac1a4431768b343","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:19:12.987984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:19:13.004681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.137:2379"}
	{"level":"info","ts":"2024-03-08T04:19:13.008742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:19:13.00953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011762Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011813Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:19:13.011827Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:19:13.013911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 04:28:37 up 14 min,  0 users,  load average: 0.36, 0.35, 0.28
	Linux embed-certs-416634 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409] <==
	W0308 04:24:16.387249       1 handler_proxy.go:93] no RequestInfo found in the context
	W0308 04:24:16.387482       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:24:16.387653       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:24:16.387663       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0308 04:24:16.387553       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:24:16.389761       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:25:15.278238       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:25:16.388400       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:16.388544       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:25:16.388573       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:25:16.390469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:16.390609       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:25:16.390617       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:26:15.278011       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 04:27:15.278189       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:27:16.388714       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:27:16.388929       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:27:16.388967       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:27:16.391134       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:27:16.391215       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:27:16.391240       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:28:15.278715       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa] <==
	I0308 04:23:03.762115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="185.227µs"
	E0308 04:23:30.393213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:23:30.921731       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:24:00.400665       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:24:00.930518       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:24:30.406418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:24:30.944007       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:25:00.412666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:00.954123       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:25:30.418182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:30.962775       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:25:38.759717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="369.014µs"
	I0308 04:25:50.754685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="134.859µs"
	E0308 04:26:00.427577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:00.971187       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:26:30.434483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:30.981278       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:27:00.440642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:00.990427       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:27:30.447198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:31.000482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:28:00.452131       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:28:01.009213       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:28:30.457966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:28:31.019466       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa] <==
	I0308 04:19:33.732454       1 server_others.go:69] "Using iptables proxy"
	I0308 04:19:33.750392       1 node.go:141] Successfully retrieved node IP: 192.168.50.137
	I0308 04:19:33.801946       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:19:33.801999       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:19:33.805996       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:19:33.806976       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:19:33.807458       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:19:33.807504       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:19:33.809147       1 config.go:188] "Starting service config controller"
	I0308 04:19:33.809621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:19:33.809700       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:19:33.809738       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:19:33.810776       1 config.go:315] "Starting node config controller"
	I0308 04:19:33.810853       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:19:33.912553       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:19:33.912783       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:19:33.912937       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f] <==
	W0308 04:19:15.451618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 04:19:15.451266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:15.456775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:19:15.456785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:19:15.457024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:15.457041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 04:19:15.457164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 04:19:15.457759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.283818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:16.284515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.405943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 04:19:16.405997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 04:19:16.438496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 04:19:16.438793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 04:19:16.451813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:16.452001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.461414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:19:16.461478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 04:19:16.503290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 04:19:16.503481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 04:19:16.581094       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 04:19:16.581409       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:19:16.640289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 04:19:16.640420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0308 04:19:19.020518       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:26:18 embed-certs-416634 kubelet[3707]: E0308 04:26:18.848794    3707 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:26:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:26:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:26:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:26:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:26:32 embed-certs-416634 kubelet[3707]: E0308 04:26:32.738774    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:26:47 embed-certs-416634 kubelet[3707]: E0308 04:26:47.737578    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:26:58 embed-certs-416634 kubelet[3707]: E0308 04:26:58.740120    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:27:13 embed-certs-416634 kubelet[3707]: E0308 04:27:13.738213    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:27:18 embed-certs-416634 kubelet[3707]: E0308 04:27:18.847627    3707 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:27:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:27:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:27:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:27:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:27:25 embed-certs-416634 kubelet[3707]: E0308 04:27:25.738463    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:27:38 embed-certs-416634 kubelet[3707]: E0308 04:27:38.737521    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:27:51 embed-certs-416634 kubelet[3707]: E0308 04:27:51.738023    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:28:06 embed-certs-416634 kubelet[3707]: E0308 04:28:06.738053    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:28:18 embed-certs-416634 kubelet[3707]: E0308 04:28:18.849773    3707 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:28:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:28:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:28:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:28:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:28:19 embed-certs-416634 kubelet[3707]: E0308 04:28:19.737794    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:28:34 embed-certs-416634 kubelet[3707]: E0308 04:28:34.739387    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	
	
	==> storage-provisioner [58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f] <==
	I0308 04:19:33.360293       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:19:33.457461       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:19:33.457545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:19:33.521485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:19:33.521708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a!
	I0308 04:19:33.522949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f51a459-45c6-4ffa-b48e-0e7a8212c146", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a became leader
	I0308 04:19:33.623954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-416634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kh9vr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr: exit status 1 (66.29331ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kh9vr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0308 04:22:52.008431  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477676 -n no-preload-477676
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:29:56.309837987 +0000 UTC m=+5649.348747022
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-477676 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-477676 logs -n 25: (2.057572372s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
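The shell block above makes the 127.0.1.1 entry in /etc/hosts point at the new hostname, either by rewriting an existing 127.0.1.1 line or appending one if the hostname is not already present. A rough Go equivalent of that idempotent fix-up, shown only as an illustration (the provisioner really runs the shell one-liner over SSH):

// Sketch of the /etc/hosts fix-up performed by the shell above: ensure
// 127.0.1.1 maps to the machine's hostname. Illustrative only.
package main

import (
	"log"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	if strings.Contains(content, hostname) {
		return nil // an entry for this hostname already exists
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(content) {
		// rewrite the existing 127.0.1.1 line, like the sed branch above
		content = loopback.ReplaceAllString(content, entry)
	} else {
		// otherwise append a new line, like the tee -a branch above
		content = strings.TrimRight(content, "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-416634"); err != nil {
		log.Fatal(err)
	}
}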
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
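provision.go:117 issues a server certificate signed by the existing minikube CA, with the SANs listed in the log line (127.0.0.1, 192.168.50.137, embed-certs-416634, localhost, minikube). A sketch of that step with crypto/x509, assuming an RSA PKCS#1 CA key; file names and validity period are placeholders, and this is not minikube's actual certificate helper.

// Sketch of the "generating server cert ... san=[...]" step above: issue a
// server certificate signed by the CA with the listed SANs. Illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func loadPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// assumes the CA key is an RSA PKCS#1 private key ("RSA PRIVATE KEY" PEM)
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-416634"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // placeholder validity
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: IPs plus hostnames
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.137")},
		DNSNames:    []string{"embed-certs-416634", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}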
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
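fix.go compares the guest clock (read over SSH with the date probe above) to the host clock and only proceeds if the delta is inside a tolerance. A small sketch of that comparison; the 2s tolerance below is a placeholder, not the value minikube actually uses.

// Sketch of the clock-skew check logged above: compare the guest's clock to
// the host's and report whether the delta is within some tolerance.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// guest clock as it would come back from a "date +%s.%N" style probe
	guest := time.Unix(0, 1709871249091803486)
	host := time.Now()
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // placeholder tolerance
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}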
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
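The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup under it. A Go sketch of the same edits done locally with regexps, included only to make the transformation explicit; the real commands run over SSH on the guest.

// Sketch of the config rewrites the sed commands above perform on
// /etc/crio/crio.conf.d/02-crio.conf. Illustrative only.
package main

import (
	"log"
	"os"
	"regexp"
)

func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i '/conmon_cgroup = .*/d' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(conf), 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		log.Fatal(err)
	}
}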
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
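The retry.go lines above are a poll-with-backoff loop: the driver asks libvirt for the domain's DHCP lease and, on a miss, waits a growing, slightly randomized interval before trying again. A generic sketch of that pattern; the probe, attempt count, and base delay are placeholders.

// Sketch of the retry loop visible above ("will retry after ...: waiting for
// machine to come up"): poll a condition with a growing, jittered delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() (bool, error), attempts int, base time.Duration) error {
	for i := 0; i < attempts; i++ {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// grow the delay and add jitter, like the varying waits in the log
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return errors.New("machine did not come up in time")
}

func main() {
	err := waitFor(func() (bool, error) {
		// placeholder probe: minikube asks libvirt for the domain's DHCP lease here
		return false, nil
	}, 5, 200*time.Millisecond)
	fmt.Println(err)
}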
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
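The kubeadm config assembled above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks those documents and prints each one's kind, assuming gopkg.in/yaml.v3; it is illustrative and not part of minikube.

// Sketch: iterate over the multi-document kubeadm YAML written above and
// print the apiVersion/kind of each document.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			log.Fatal(err)
		}
		fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
	}
}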
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
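Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check expressed in Go with crypto/x509, as an illustration; the path below is a placeholder.

// Sketch of the "-checkend 86400" checks above: report whether a PEM
// certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}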
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
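
The libmachine lines above poll libvirt's DHCP leases for the VM's IP, sleeping a growing, jittered interval between attempts (934ms, 1.5s, 2.1s, ...). A loose sketch of that wait loop; lookupIP stands in for the lease query and the backoff constants are assumptions, not minikube's actual retry policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
// or maxWait elapses.
func waitForIP(lookupIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // pretend DHCP hands out a lease on the 4th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.61.32", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
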
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
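
The healthz loop above tolerates an initial connection refused, then 403s (RBAC not bootstrapped yet, so system:anonymous is refused) and 500s (post-start hooks still failing), and only stops on a 200. A standalone sketch of that wait, assuming no client certificates (hence the insecure TLS config, which would also see anonymous 403s); minikube's real check authenticates with the cluster's certs:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url until it returns 200, treating errors, 403 and 500
// as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.137:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ok")
}
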
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
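
pod_ready.go is polling each system-critical pod until its PodReady condition reports True. A small client-go sketch of the same check; the pod name comes from the log, while the KUBECONFIG handling and the 2-second polling interval are assumptions:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-mqz25", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet, retrying")
		time.Sleep(2 * time.Second)
	}
}
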
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
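
The server certificate above is issued with SANs covering the loopback address, the VM IP, the machine name, localhost and minikube, signed by the CA under .minikube/certs. A self-contained sketch of issuing such a certificate with crypto/x509; it creates a throwaway CA in memory purely so the example runs on its own, whereas the real flow loads ca.pem and ca-key.pem from disk:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption: the real flow reuses the stored ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs the provisioner asked for.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-968261"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-968261", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.32")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
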
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
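
fix.go compares the guest clock (read over SSH with date) against the host clock and only accepts the restarted machine if the skew stays within a tolerance; the measured delta here is ~91ms. A tiny sketch of that comparison using the two timestamps reported in the log; the 2-second tolerance is an assumption, not minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns |host - guest| and whether it is within tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1709871270, 245462646) // guest: `date +%s.%N` over SSH (from the log)
	host := time.Unix(1709871270, 154552705)  // host-side "Remote" timestamp (from the log)
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, ok) // 90.909941ms, true
}
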
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
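The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.9 pause image with the cgroupfs cgroup manager and a "pod" conmon cgroup. As a rough illustration only (this is not minikube's code, and the drop-in file name below is hypothetical), an equivalent CRI-O drop-in could be written directly from Go:

// Illustrative sketch: write a CRI-O drop-in with the same values the log
// configures via sed. Requires root; restart crio afterwards to apply.
package main

import (
	"fmt"
	"os"
)

func main() {
	dropIn := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`
	// "99-example.conf" is a made-up name; any file under crio.conf.d is merged in.
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-example.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write drop-in:", err)
		os.Exit(1)
	}
	fmt.Println("wrote CRI-O drop-in; run `systemctl restart crio` to apply")
}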
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
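The "will retry after ...: waiting for machine to come up" lines above come from a jittered, growing retry loop while libvirt assigns the VM an IP. A minimal sketch of that pattern (illustrative only, not the actual retry package) looks like:

// Minimal retry-with-backoff sketch mirroring the varying intervals in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		// grow the delay with a little jitter, as the log's varying intervals suggest
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}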
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
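The pod_ready.go waits above poll each kube-system pod until its Ready condition is True. A self-contained client-go sketch of that check (kubeconfig path and timeout are placeholders; the pod name is taken from the log) could look like:

// Poll a pod until its Ready condition is True or a timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-mqz25", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}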
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
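The kubeadm/kubelet YAML printed above is generated from the cluster options and copied to /var/tmp/minikube/kubeadm.yaml.new. As a loose illustration of templating such a fragment (this is not minikube's actual template; only the values mirror the log), a text/template rendering of the KubeletConfiguration piece might be:

// Render a KubeletConfiguration fragment from a small options struct.
package main

import (
	"os"
	"text/template"
)

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
staticPodPath: /etc/kubernetes/manifests
`

type kubeletOpts struct {
	CgroupDriver string
	CRISocket    string
	DNSDomain    string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// Values mirror the rendered config shown in the log above.
	opts := kubeletOpts{
		CgroupDriver: "cgroupfs",
		CRISocket:    "unix:///var/run/crio/crio.sock",
		DNSDomain:    "cluster.local",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}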
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
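Each "openssl x509 ... -checkend 86400" call above asks whether a certificate expires within the next 24 hours. The same check can be done with Go's standard library (the path below is a placeholder for any of the certs listed in the log):

// Report whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}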
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
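The 500 responses above are expected while the apiserver's post-start hooks finish; api_server.go keeps polling /healthz until it returns 200. A minimal sketch of that loop (endpoint taken from this run; the insecure TLS setting is only to keep the example self-contained, the real code verifies against the cluster CA):

// Poll the apiserver /healthz endpoint until it reports healthy or a timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.32:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}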
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
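[editor's note] The api_server.go lines above show the usual pattern: poll https://<node>:8444/healthz and treat anything but 200 as "not yet healthy". A minimal sketch of that loop, assuming an *http.Client already configured with the cluster's CA and client certificates; the URL, timeout and 500ms interval are taken or guessed from the log, and this is not minikube's actual implementation:

// healthz_poll.go - illustrative only; not minikube's api_server.go.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 or the
// deadline expires. client must already carry the cluster's TLS credentials.
func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver finally answers "ok"
			}
			// Mirrors the "returned 500" dumps in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// In a real check the client must present the kubeconfig's client cert;
	// http.DefaultClient is only a placeholder here.
	if err := waitForHealthz(http.DefaultClient, "https://192.168.61.32:8444/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}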
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
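[editor's note] system_pods.go and node_conditions.go above amount to two API reads: list the pods in kube-system and inspect node capacity/conditions. A compact client-go sketch of those reads, assuming the kubeconfig path from the log and a vendored k8s.io/client-go; error handling is simplified and this is not minikube's code:

// inspect_cluster.go - illustrative client-go reads, not minikube's system_pods.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "waiting for kube-system pods to appear": list pods in kube-system.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// "verifying NodePressure condition": read node capacity (cpu, ephemeral storage).
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}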
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
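[editor's note] Every pod_ready.go entry above is skipped for the same reason: a pod cannot be counted Ready while its hosting node still reports Ready=False. A short sketch of that two-part predicate; the skip-on-node-not-ready behaviour is inferred from the log messages rather than taken from minikube's source:

// readiness.go - illustrative checks mirroring the pod_ready logic in the log.
package readiness

import corev1 "k8s.io/api/core/v1"

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True; while it is
// False, waiting on individual pods is pointless, which is why the log skips them.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}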
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
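[editor's note] The addon manifests are scp'd to /etc/kubernetes/addons and then applied by running kubectl on the node over SSH (ssh_runner in the log). A minimal sketch of that pattern using golang.org/x/crypto/ssh; the key path, address and command string are copied from the log, everything else (user, host-key handling) is an assumption and not minikube's ssh_runner:

// ssh_apply.go - illustrative remote "kubectl apply", not minikube's ssh_runner.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.61.32:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same shape as the apply commands in the log above.
	out, err := session.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}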
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
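[editor's note] The provision.go line above issues a server certificate signed by the local CA with the SANs 127.0.0.1, 192.168.39.3, localhost, minikube and old-k8s-version-496808. A hedged sketch of the equivalent using Go's crypto/x509; validity period, key size and serial handling are assumptions, and minikube's provisioner uses its own helpers rather than this code:

// servercert.go - illustrative: issue a server cert with the SANs seen in the log.
package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// IssueServerCert signs a server certificate with the given CA key pair for
// the IP and DNS SANs listed in the log line above.
func IssueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-496808"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-496808"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}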
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
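	The pod_ready lines interleaved here come from three other test processes (959302, 959419, 959713), each polling a metrics-server pod that never reports Ready. A hypothetical spot-check of the same condition from outside the test, using one pod name exactly as it appears in the log (the kubectl jsonpath query is illustrative, not what pod_ready.go runs), would be:

	    # check the Ready condition of one of the pods being polled above (illustrative only)
	    kubectl -n kube-system get pod metrics-server-57f55c9bc5-6nb8p \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	A "False" result from that query corresponds to the has status "Ready":"False" entries the tests keep logging.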
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
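	Each cycle for process 959882 begins with the same per-component container search before any logs are gathered: cri.go issues one crictl query per expected control-plane component, and with CRI-O holding no containers at all, every query returns an empty id list. Condensed into a single loop (the loop itself is illustrative; minikube runs these as separate ssh_runner calls, and the component names are copied from the listing lines above):

	    # list containers for each component the log checks, as crictl queries (sketch)
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done

	Empty output for every name is what produces the repeated 0 containers / No container was found matching entries in each pass.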
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
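The repeated "kubectl get sa default" calls between 04:19:19 and 04:19:30 are minikube polling for the default service account before it considers kube-system privileges elevated; a rough shell equivalent of that wait loop (the half-second poll interval is an assumption read off the timestamps above):

    # rough equivalent of the service-account poll logged above
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done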
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
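The healthz wait logged here is a plain HTTPS GET against the apiserver; it can be reproduced by hand against the endpoint from this run, assuming anonymous access to the health endpoints (the upstream default). Note -k skips TLS verification, whereas minikube verifies against the cluster CA:

    # probe the apiserver health endpoints by hand (IP/port taken from the log above)
    curl -k https://192.168.50.137:8443/healthz            # prints "ok" on success
    curl -k "https://192.168.50.137:8443/readyz?verbose"   # per-check breakdown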
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
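The 4m0s timeout above means the metrics-server pod in kube-system never reported Ready; when triaging a run like this, the usual follow-up checks are (assuming the addon's standard k8s-app=metrics-server label and a deployment named metrics-server):

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system logs deploy/metrics-server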
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
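The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is removed unless it already points at https://control-plane.minikube.internal:8443. Condensed into a shell sketch (not minikube's actual code):

    # condensed form of the per-file checks logged above
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done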
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
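	(The 457-byte 1-k8s.conflist written above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. A minimal bridge conflist of the general shape the bridge plugin accepts looks roughly like the sketch below — field values, including the pod subnet, are illustrative assumptions, not the file minikube actually wrote:)
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}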
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
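	(The repeated "kubectl get sa default" runs above are minikube polling roughly every half second until the default ServiceAccount exists, so the minikube-rbac ClusterRoleBinding created earlier can take effect; the elevateKubeSystemPrivileges wait completes once the poll succeeds. A hedged shell equivalent of that wait, not minikube's actual implementation, would be:)
	# poll until the default ServiceAccount is created, then stop
	until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done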
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
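	(The four metrics-server manifests applied above are the addon's usual pieces: API registration, Deployment, RBAC, and Service. As one illustrative example, an APIService that registers metrics.k8s.io typically looks like the sketch below; the exact manifest minikube ships is not reproduced in this log and may differ:)
	apiVersion: apiregistration.k8s.io/v1
	kind: APIService
	metadata:
	  name: v1beta1.metrics.k8s.io
	spec:
	  group: metrics.k8s.io
	  version: v1beta1
	  groupPriorityMinimum: 100
	  versionPriority: 100
	  insecureSkipTLSVerify: true
	  service:
	    name: metrics-server
	    namespace: kube-system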
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
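	(The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, which returned the bare string "ok". A rough manual equivalent, assuming the default anonymous access to /healthz has not been disabled on this cluster, would be:)
	# prints "ok" when the apiserver is healthy
	curl -k https://192.168.72.214:8443/healthz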
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
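The kubeadm output above says the kubelet never became healthy on port 10248; on a CRI-O node the usual first check is whether the kubelet and CRI-O agree on the cgroup driver. A minimal sketch of that check, run on the node (this names the likely cause as an assumption, not something this log confirms; the file paths are the ones the log itself reports):

	# Driver the kubelet was configured with (file written in the [kubelet-start] phase above)
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	# Cgroup manager CRI-O is using
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	# The kubelet's own view of why it is failing
	sudo journalctl -xeu kubelet --no-pager | tail -n 50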
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
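The run exits with K8S_KUBELET_NOT_RUNNING, and the Suggestion line above points at a kubelet cgroup-driver override. A minimal sketch of applying that suggestion, assuming a placeholder <profile> (hypothetical; substitute the failing profile from this run) and using only flags already shown in the log:

	# Inspect the kubelet on the node first, as the kubeadm output recommends
	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# Retry the start with the kubelet cgroup driver pinned to systemd,
	# per the "Suggestion" line in the log above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	# If it still fails, collect logs for the upstream issue as the log suggests
	minikube -p <profile> logs --file=logs.txt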
	
	
	==> CRI-O <==
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.754125012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872197754105794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39a93aa3-e122-4d90-80bb-456e9050ef65 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.754783448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=deb8d10c-3b1a-4a71-9388-36638d018bad name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.754892929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=deb8d10c-3b1a-4a71-9388-36638d018bad name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.755087245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=deb8d10c-3b1a-4a71-9388-36638d018bad name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.800069288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fc51834-0ec6-4a72-85dd-fa0ee6a0fe58 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.800166379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fc51834-0ec6-4a72-85dd-fa0ee6a0fe58 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.801459859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab8bfb4a-599e-473f-af06-b4c4e3ef6dae name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.801927563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872197801905379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab8bfb4a-599e-473f-af06-b4c4e3ef6dae name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.802611959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9788920-9292-493f-a207-98ee5bfb3b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.802763027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9788920-9292-493f-a207-98ee5bfb3b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.803012377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9788920-9292-493f-a207-98ee5bfb3b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.850087900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cbc6d2a-8311-4519-8d20-8d5742e987c7 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.850184571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cbc6d2a-8311-4519-8d20-8d5742e987c7 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.851517916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c3db442-cffe-418b-976c-18719155b936 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.852021457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872197851996034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c3db442-cffe-418b-976c-18719155b936 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.853212019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4485a26b-8c21-45c3-8b92-7bde20a9e90a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.853261806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4485a26b-8c21-45c3-8b92-7bde20a9e90a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.853437967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4485a26b-8c21-45c3-8b92-7bde20a9e90a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.892475993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e25f0eb5-75b6-4973-a7a6-dad3e38017e6 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.892564922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e25f0eb5-75b6-4973-a7a6-dad3e38017e6 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.894523296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b21a9716-584e-41f8-ac34-c122daa6d171 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.894922813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872197894902503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b21a9716-584e-41f8-ac34-c122daa6d171 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.895526886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c4450ba-8c08-4f81-904c-cd9cb7bf4b08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.895607843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c4450ba-8c08-4f81-904c-cd9cb7bf4b08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:29:57 no-preload-477676 crio[693]: time="2024-03-08 04:29:57.895777661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c4450ba-8c08-4f81-904c-cd9cb7bf4b08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9cdfabb3cefbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f610f2004d327       storage-provisioner
	d6369b2ee70d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0e327ddee7d06       coredns-76f75df574-kj6pn
	415c28097fb28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6a15b4ce6825e       coredns-76f75df574-hc8hb
	b2345362ee614       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   33e7763cddb89       kube-proxy-hr99w
	e301dc16dd6a1       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   1ecd4469af9c6       kube-apiserver-no-preload-477676
	c4be4bd9dfeb7       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   d27d66099466c       kube-controller-manager-no-preload-477676
	80e16eaa474ea       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   a4d40053267ff       kube-scheduler-no-preload-477676
	f32a6376e7a62       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   e2a3319dbe680       etcd-no-preload-477676
	
	
	==> coredns [415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-477676
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-477676
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=no-preload-477676
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:20:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-477676
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:29:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:26:03 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:26:03 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:26:03 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:26:03 +0000   Fri, 08 Mar 2024 04:20:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.214
	  Hostname:    no-preload-477676
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ee474f5d38b412f97d44586a1c6295d
	  System UUID:                0ee474f5-d38b-412f-97d4-4586a1c6295d
	  Boot ID:                    5a090d92-5599-4ca0-8e46-294782b3c871
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hc8hb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-76f75df574-kj6pn                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-477676                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-477676             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-477676    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-hr99w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-477676             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-756mf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node no-preload-477676 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node no-preload-477676 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node no-preload-477676 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-477676 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-477676 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-477676 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-477676 event: Registered Node no-preload-477676 in Controller
	
	
	==> dmesg <==
	[  +0.055608] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047285] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar 8 04:15] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.621469] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.750428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.474086] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.060671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071882] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.219808] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.146954] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.258947] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[ +17.368653] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.074529] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.475552] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +4.603960] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.845110] kauditd_printk_skb: 74 callbacks suppressed
	[Mar 8 04:20] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.403400] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +4.619184] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.669708] systemd-fstab-generator[4172]: Ignoring "noauto" option for root device
	[ +13.866624] systemd-fstab-generator[4385]: Ignoring "noauto" option for root device
	[  +0.069194] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 8 04:21] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734] <==
	{"level":"info","ts":"2024-03-08T04:20:32.251021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 switched to configuration voters=(12539050793187564691)"}
	{"level":"info","ts":"2024-03-08T04:20:32.260546Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T04:20:32.260551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4d2f25243ca737f5","local-member-id":"ae03a842fc865c93","added-peer-id":"ae03a842fc865c93","added-peer-peer-urls":["https://192.168.72.214:2380"]}
	{"level":"info","ts":"2024-03-08T04:20:32.260725Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ae03a842fc865c93","initial-advertise-peer-urls":["https://192.168.72.214:2380"],"listen-peer-urls":["https://192.168.72.214:2380"],"advertise-client-urls":["https://192.168.72.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:20:32.260739Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.214:2380"}
	{"level":"info","ts":"2024-03-08T04:20:32.260922Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.214:2380"}
	{"level":"info","ts":"2024-03-08T04:20:32.260975Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:20:33.173093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.174908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.175063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 received MsgPreVoteResp from ae03a842fc865c93 at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.175191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 received MsgVoteResp from ae03a842fc865c93 at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ae03a842fc865c93 elected leader ae03a842fc865c93 at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.180191Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ae03a842fc865c93","local-member-attributes":"{Name:no-preload-477676 ClientURLs:[https://192.168.72.214:2379]}","request-path":"/0/members/ae03a842fc865c93/attributes","cluster-id":"4d2f25243ca737f5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:20:33.182897Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.183069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:20:33.183524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:20:33.187538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:20:33.189688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.214:2379"}
	{"level":"info","ts":"2024-03-08T04:20:33.189966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:20:33.190007Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:20:33.190057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4d2f25243ca737f5","local-member-id":"ae03a842fc865c93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.190181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.190243Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 04:29:58 up 14 min,  0 users,  load average: 0.15, 0.20, 0.17
	Linux no-preload-477676 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412] <==
	I0308 04:23:54.402903       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:25:34.795582       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:34.795730       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0308 04:25:35.796978       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:35.797052       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:25:35.797063       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:25:35.797292       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:25:35.797413       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:25:35.798446       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:26:35.797904       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:26:35.797996       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:26:35.798011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:26:35.799134       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:26:35.799233       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:26:35.799281       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:28:35.798786       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:28:35.799280       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:28:35.799305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:28:35.799392       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:28:35.799429       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:28:35.801499       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1] <==
	I0308 04:24:23.340542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.935µs"
	E0308 04:24:50.404331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:24:50.945967       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:25:20.410935       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:20.954421       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:25:50.417537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:25:50.963568       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:26:20.423267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:20.971978       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:26:50.429914       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:26:50.980709       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:26:56.345489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="265.736µs"
	I0308 04:27:11.339462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="149.776µs"
	E0308 04:27:20.436304       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:20.991061       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:27:50.442776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:27:51.000198       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:28:20.448167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:28:21.007621       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:28:50.455440       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:28:51.018307       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:29:20.461662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:29:21.027724       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:29:50.469732       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:29:51.036789       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845] <==
	I0308 04:20:52.167156       1 server_others.go:72] "Using iptables proxy"
	I0308 04:20:52.188198       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.214"]
	I0308 04:20:52.284557       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0308 04:20:52.284608       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:20:52.284622       1 server_others.go:168] "Using iptables Proxier"
	I0308 04:20:52.299350       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:20:52.299611       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0308 04:20:52.299651       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:20:52.302722       1 config.go:188] "Starting service config controller"
	I0308 04:20:52.302768       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:20:52.302785       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:20:52.302789       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:20:52.303990       1 config.go:315] "Starting node config controller"
	I0308 04:20:52.304025       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:20:52.402986       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 04:20:52.403066       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:20:52.404059       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486] <==
	W0308 04:20:34.845230       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 04:20:34.845355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 04:20:34.845389       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:20:34.845511       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 04:20:34.845934       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:20:34.846078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 04:20:35.643949       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 04:20:35.644029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 04:20:35.673037       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 04:20:35.673093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 04:20:35.700806       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 04:20:35.701050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 04:20:35.741028       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 04:20:35.741255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 04:20:35.806295       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 04:20:35.806421       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:20:35.862644       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 04:20:35.862697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 04:20:35.966953       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 04:20:35.967101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 04:20:35.968414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:20:35.968564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 04:20:36.027325       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:20:36.027382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0308 04:20:38.720924       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:27:38 no-preload-477676 kubelet[4179]: E0308 04:27:38.361699    4179 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:27:38 no-preload-477676 kubelet[4179]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:27:38 no-preload-477676 kubelet[4179]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:27:38 no-preload-477676 kubelet[4179]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:27:38 no-preload-477676 kubelet[4179]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:27:49 no-preload-477676 kubelet[4179]: E0308 04:27:49.324423    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:28:02 no-preload-477676 kubelet[4179]: E0308 04:28:02.324440    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:28:17 no-preload-477676 kubelet[4179]: E0308 04:28:17.324191    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:28:29 no-preload-477676 kubelet[4179]: E0308 04:28:29.323490    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:28:38 no-preload-477676 kubelet[4179]: E0308 04:28:38.364263    4179 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:28:38 no-preload-477676 kubelet[4179]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:28:38 no-preload-477676 kubelet[4179]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:28:38 no-preload-477676 kubelet[4179]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:28:38 no-preload-477676 kubelet[4179]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:28:41 no-preload-477676 kubelet[4179]: E0308 04:28:41.324785    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:28:56 no-preload-477676 kubelet[4179]: E0308 04:28:56.325123    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:29:09 no-preload-477676 kubelet[4179]: E0308 04:29:09.323563    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:29:24 no-preload-477676 kubelet[4179]: E0308 04:29:24.325605    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:29:37 no-preload-477676 kubelet[4179]: E0308 04:29:37.325331    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:29:38 no-preload-477676 kubelet[4179]: E0308 04:29:38.359399    4179 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:29:38 no-preload-477676 kubelet[4179]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:29:38 no-preload-477676 kubelet[4179]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:29:38 no-preload-477676 kubelet[4179]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:29:38 no-preload-477676 kubelet[4179]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:29:52 no-preload-477676 kubelet[4179]: E0308 04:29:52.323883    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	
	
	==> storage-provisioner [9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759] <==
	I0308 04:20:53.970999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:20:54.005946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:20:54.006015       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:20:54.035778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d7ae5d8-3d11-424b-913d-7f8abac3e49d", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f became leader
	I0308 04:20:54.032409       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:20:54.036331       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f!
	I0308 04:20:54.137182       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-477676 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-756mf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf: exit status 1 (70.766786ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-756mf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
(identical warning repeated 18 more times)
E0308 04:23:32.256694  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
(identical warning repeated 164 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
E0308 04:27:52.008641  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
    [last message repeated 39 more times]
E0308 04:28:32.256236  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
E0308 04:31:35.306600  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
[... same warning repeated 31 more times ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (254.290742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-496808" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (251.063294ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25: (1.551619205s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
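The wait loop above (retry.go:31) backs off between lease lookups until the domain reports an address. A minimal Go sketch of that pattern, assuming a hypothetical lookup callback standing in for the libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for the "unable to find current IP address of domain ..." condition above.
var errNoLease = errors.New("no DHCP lease yet")

// waitForIP polls a lease lookup until an IP appears or the deadline passes,
// growing the wait between attempts the way the retry.go lines above do.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		wait += time.Duration(rand.Int63n(int64(wait))) // grow the wait with jitter
		if wait > 4*time.Second {
			wait = 4 * time.Second
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", deadline)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoLease
		}
		return "192.168.50.137", nil
	}, time.Minute)
	fmt.Println(ip, err)
}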
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
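The `exit 0` probe above is plain /usr/bin/ssh invoked with non-interactive options until it succeeds. A sketch of the same probe via os/exec; the option subset and helper name are illustrative, not minikube's exact code:

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs `/usr/bin/ssh ... user@ip exit 0` with the same non-interactive
// options the log shows and reports whether the probe exited cleanly.
func sshReachable(ip, user, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit", "0",
	}
	if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh probe failed: %v (output: %q)", err, out)
	}
	return nil
}

func main() {
	key := "/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa"
	fmt.Println("reachable:", sshReachable("192.168.50.137", "docker", key) == nil)
}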
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
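Provisioning the hostname is two remote commands: `sudo hostname <name> && echo <name> | sudo tee /etc/hostname`, followed by the small /etc/hosts script above that rewrites or appends the 127.0.1.1 entry. A sketch of composing those commands; the SSH runner that executes them is assumed:

package main

import "fmt"

// setHostnameCmds builds the two remote snippets the provisioner runs: one that sets the
// hostname, and the /etc/hosts script shown above.
func setHostnameCmds(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, cmd := range setHostnameCmds("embed-certs-416634") {
		fmt.Println(cmd)
	}
}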
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
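configureAuth regenerates the host-side certificates and then pushes three files to /etc/docker on the guest, as the scp lines above show. A sketch of the source-to-destination mapping only; the copy mechanism itself is omitted:

package main

import "fmt"

// remoteCertCopies lists the cert files configureAuth pushes to the guest, mirroring the
// "scp ... --> /etc/docker/..." lines above.
func remoteCertCopies(localMachineDir, localCertsDir string) map[string]string {
	return map[string]string{
		localCertsDir + "/ca.pem":           "/etc/docker/ca.pem",
		localMachineDir + "/server.pem":     "/etc/docker/server.pem",
		localMachineDir + "/server-key.pem": "/etc/docker/server-key.pem",
	}
}

func main() {
	for src, dst := range remoteCertCopies(
		"/home/jenkins/minikube-integration/18333-911675/.minikube/machines",
		"/home/jenkins/minikube-integration/18333-911675/.minikube/certs") {
		fmt.Printf("scp %s --> %s\n", src, dst)
	}
}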
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
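The guest clock check above compares `date +%s.%N` on the guest with the host-side timestamp and accepts the drift when the delta stays under a tolerance. A sketch of parsing that output and computing the delta; the tolerance value here is an assumption, the real threshold lives in minikube's fix.go:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1709871249.091803486") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secStr, fracStr, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if fracStr != "" {
		nsec, err = strconv.ParseInt((fracStr + "000000000")[:9], 10, 64) // pad to nanoseconds
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1709871249.091803486")
	remote := guest.Add(-73617066 * time.Nanosecond) // host-side timestamp from the log
	delta := guest.Sub(remote)
	tolerance := time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}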
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
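Configuring CRI-O above is a handful of in-place sed edits (pause image, cgroup_manager, conmon_cgroup) followed by a daemon-reload and a crio restart. A sketch of that command sequence; the run helper that would execute each line over SSH is assumed:

package main

import "fmt"

// crioConfigCmds returns the shell commands the log shows for pointing CRI-O at the
// desired pause image and the cgroupfs cgroup manager.
func crioConfigCmds(pauseImage string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.9") {
		fmt.Println(c)
	}
}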
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
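Both host.minikube.internal here and control-plane.minikube.internal later are pinned with the same bash one-liner: filter the old entry out of /etc/hosts, append the new line, and copy the temp file back with sudo. A sketch of building that command; executing it over SSH is left out:

package main

import "fmt"

// ensureHostsEntry builds the bash one-liner from the log: drop any existing entry for
// name from /etc/hosts, append "ip<tab>name", and copy the temp file back with sudo.
func ensureHostsEntry(ip, name string) string {
	entry := ip + "\t" + name
	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, name, entry)
}

func main() {
	fmt.Println(ensureHostsEntry("192.168.50.1", "host.minikube.internal"))
}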
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
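The kubelet drop-in above ([Unit]/[Service]/[Install] with an ExecStart override) is rendered from the node name, IP, and Kubernetes version. A minimal text/template sketch that reproduces the unit shown; the template text is an assumption for illustration, not minikube's source:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the ExecStart override shown above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct{ KubernetesVersion, NodeName, NodeIP string }{"v1.28.4", "embed-certs-416634", "192.168.50.137"})
}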
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
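The "scp memory --> path (N bytes)" lines above push an in-memory rendered file straight to a guest path. A sketch of one way to do that with a sudo tee pipeline; the runWithStdin helper is an assumption standing in for the SSH runner:

package main

import "fmt"

// copyMemory mirrors the "scp memory --> <dst> (<n> bytes)" lines: an in-memory payload
// is pushed to the guest and written to dst.
func copyMemory(runWithStdin func(cmd string, stdin []byte) error, dst string, payload []byte) error {
	fmt.Printf("scp memory --> %s (%d bytes)\n", dst, len(payload))
	return runWithStdin(fmt.Sprintf("sudo mkdir -p $(dirname %s) && sudo tee %s >/dev/null", dst, dst), payload)
}

func main() {
	fake := func(cmd string, stdin []byte) error { fmt.Println("would run:", cmd); return nil }
	_ = copyMemory(fake, "/var/tmp/minikube/kubeadm.yaml.new", []byte("apiVersion: kubeadm.k8s.io/v1beta3\n"))
}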
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
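The probes above hit https://192.168.61.32:8444/healthz roughly every half second until the apiserver stops answering 500. A self-contained sketch of that kind of probe, not minikube's actual api_server.go code and with TLS verification skipped because the sketch has no access to the cluster CA, might look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint every interval until it
// returns HTTP 200 or the timeout expires.
func probeHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := probeHealthz("https://192.168.61.32:8444/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}

A 200 response with body "ok" ends the loop, matching the final probe in the log; the intermediate 500s list which post-start hooks have not finished yet.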
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
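The pod_ready waits above inspect each system-critical pod's Ready condition, and skip the wait while the node itself still reports Ready=False. A rough client-go sketch of the per-pod check (requires the k8s.io/client-go module; the kubeconfig path and pod name are taken from the log, everything else is illustrative, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod in kube-system has its Ready
// condition set to True, mirroring the checks logged by pod_ready.go.
func isPodReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18333-911675/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(clientset, "coredns-5dd5756b68-xqqds")
	fmt.Println("ready:", ready, "err:", err)
}

In the run above every such check is skipped because node "default-k8s-diff-port-968261" is still NotReady, so the loop falls through after about 1.35s.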
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
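
The SSH command above is an idempotent check-then-patch of /etc/hosts: leave the file alone if some line already maps the node name, otherwise rewrite the existing 127.0.1.1 entry or append one. A minimal Go sketch of the same logic (hypothetical helper name, not minikube's actual provisioner code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // patchEtcHosts mirrors the shell above: no-op if the hostname is already
    // mapped, otherwise replace the 127.0.1.1 line or append a new one.
    func patchEtcHosts(contents, hostname string) string {
        hostLine := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
        if hostLine.MatchString(contents) {
            return contents
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(contents) {
            return loopback.ReplaceAllString(contents, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(patchEtcHosts(hosts, "old-k8s-version-496808"))
    }
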
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
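
The server cert generated above carries both IP and DNS SANs (127.0.0.1, 192.168.39.3, localhost, minikube, the node name) so the machine can be reached securely under any of those names. A self-signed Go sketch of issuing a certificate with the same SAN set; minikube actually signs it with the cluster CA key from ca-key.pem rather than self-signing:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-496808"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-496808"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
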
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
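
The clock check above parses the guest's `date +%s.%N` output and compares it with the host-side timestamp captured at the same moment; the run stays on the happy path because the ~93ms delta is under tolerance. A small Go sketch of that parse-and-compare (illustrative only; the real check lives in fix.go as logged above, and the 2s tolerance here is an assumption):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses "seconds.nanoseconds" as printed by `date +%s.%N`.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := guestTime("1709871294.910163930") // value from the log above
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, time.March, 8, 4, 14, 54, 816936754, time.UTC)
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("delta=%v within assumed 2s tolerance: %v\n", delta, delta <= 2*time.Second)
    }
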
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
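
The three sed invocations above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-insert conmon_cgroup = "pod" directly after it. The same rewrite expressed as a Go string transform, with an assumed sample input rather than the real 02-crio.conf shipped in the ISO:

    package main

    import (
        "fmt"
        "regexp"
    )

    // patchCrioConf applies the equivalents of the sed edits above to the
    // contents of /etc/crio/crio.conf.d/02-crio.conf.
    func patchCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
            ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
        conf = regexp.MustCompile(`(?m)^cgroup_manager = "cgroupfs"$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        // Assumed sample input for illustration only.
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
        fmt.Print(patchCrioConf(in))
    }
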
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
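
Each "needs transfer" line above comes from comparing the image ID reported by `sudo podman image inspect --format {{.Id}}` against the hash minikube expects; a missing image or a mismatched ID marks the image for loading from the local cache, which then fails here because the cached image file does not exist on disk. A tiny Go sketch of that decision (hypothetical helper, not cache_images.go):

    package main

    import "fmt"

    // needsTransfer reports whether an image must be loaded from the cache:
    // either the runtime does not have it, or it is present under a different ID.
    func needsTransfer(image, wantID string, installed map[string]string) bool {
        gotID, ok := installed[image]
        return !ok || gotID != wantID
    }

    func main() {
        installed := map[string]string{} // e.g. parsed from podman/crictl output
        fmt.Println(needsTransfer(
            "registry.k8s.io/coredns:1.7.0",
            "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
            installed,
        ))
    }
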
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
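
The kubelet drop-in above is rendered from the node's settings (kubelet binary path, CRI socket, hostname override, node IP) and, a few lines further down, written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A text/template rendering of the same unit, with illustrative struct and field names rather than minikube's real config types:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts holds just the values substituted into the drop-in above.
    type kubeletOpts struct {
        Binary, Hostname, NodeIP, CRISocket string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Binary}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        t.Execute(os.Stdout, kubeletOpts{
            Binary:    "/var/lib/minikube/binaries/v1.20.0/kubelet",
            Hostname:  "old-k8s-version-496808",
            NodeIP:    "192.168.39.3",
            CRISocket: "unix:///var/run/crio/crio.sock",
        })
    }
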
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
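The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new just below. A minimal sketch, assuming the same paths and the bundled v1.20.0 kubeadm binary, for inspecting and dry-running that config on the node without changing the cluster:

	# view the rendered config exactly as minikube copied it over
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# exercise kubeadm against it without applying anything
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run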
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
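The /etc/hosts one-liner above is dense: it strips any stale control-plane.minikube.internal mapping and appends the address just resolved for the control plane. A readable sketch of the same logic (the temp-file name here is only illustrative):

	# keep every line except an old control-plane.minikube.internal mapping
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	# append the current control-plane address
	printf '192.168.39.3\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	# copy the result back over /etc/hosts
	sudo cp /tmp/hosts.new /etc/hosts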
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
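The openssl calls above serve two purposes: -hash -noout prints the subject hash used to name the /etc/ssl/certs/<hash>.0 symlinks for the trusted CA files, and -checkend 86400 exits non-zero if a certificate expires within the next 24 hours, which lets the restart path flag soon-to-expire certificates. A minimal sketch of both checks, reusing file names from this log:

	# subject hash that names the /etc/ssl/certs symlink for a trusted CA
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# non-zero exit if the cert expires within 86400 seconds (24 hours)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid tomorrow" || echo "expiring within 24h"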
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
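Because existing configuration files were found, the restart path rebuilds the control plane with individual kubeadm init phase sub-commands rather than a full kubeadm init; the certs and kubeconfig phases run here, with kubelet-start, control-plane and etcd following at 04:15:04-05 below. A minimal sketch of the same sequence by hand, assuming the same binary and config paths:

	KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo "$KUBEADM" init phase certs all --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	sudo "$KUBEADM" init phase etcd local --config "$CFG"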
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
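The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube (extra start-up options that crio is set up to read on the minikube guest) and restarts the runtime. A quick sketch, run on the node, to confirm the drop-in landed and crio came back up:

	sudo cat /etc/sysconfig/crio.minikube   # should show the --insecure-registry option
	sudo systemctl is-active crio           # should print "active" after the restart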
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
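
Before the crio restart above, the sed invocations pin the pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf. A rough local equivalent of those two edits in Go follows; this is a sketch, not minikube's crio.go, and it assumes the drop-in file already exists.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Same two substitutions the log performs with sed.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // As in the log, CRI-O still has to be restarted (systemctl restart crio) to pick this up.
    }
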
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
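
The multi-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has just been written to /var/tmp/minikube/kubeadm.yaml.new. One property worth checking in such a file is that the kubelet's cgroupDriver matches the cgroup_manager configured for CRI-O earlier in the log (both cgroupfs here), since a mismatch is a common cause of kubelet start failures. A small Go sketch of that check follows, assuming a hypothetical local copy of the file and the gopkg.in/yaml.v3 module; it is illustrative only.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // "kubeadm.yaml" is a hypothetical local copy of /var/tmp/minikube/kubeadm.yaml.new.
        raw, err := os.ReadFile("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            if doc["kind"] == "KubeletConfiguration" {
                // Expect "cgroupfs", matching CRI-O's cgroup_manager set earlier in the log.
                fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
            }
        }
    }
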
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
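(The healthz sequence above, 403 from anonymous probes, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing, then 200 "ok", is the normal progression for an apiserver that has just been restarted. Below is a minimal sketch of that kind of polling loop; it is illustrative only, not minikube's actual api_server.go, and the endpoint, interval, and timeout are assumptions taken from the log above.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the timeout elapses. 403 (anonymous access denied) and 500 (post-start hooks
// still failing) are treated as "not ready yet", matching the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate, so verification is
		// skipped in this illustrative sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// 192.168.72.214:8443 is the apiserver address seen in the log above.
	if err := waitForHealthz("https://192.168.72.214:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}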
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
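(The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines are a poll for a running apiserver process on the guest, retried roughly every 500ms. A rough local equivalent, run through os/exec rather than minikube's ssh_runner, is sketched below; the timeout is an assumption.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it reports a match or the
// timeout elapses. pgrep exits non-zero when nothing matches, which exec
// surfaces as an error, so an error here simply means "keep waiting".
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // PID of the newest matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}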
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
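(The pod_ready.go lines record a per-pod wait: each system-critical pod is polled until its Ready condition is True, and the wait is skipped with a warning while the hosting node itself is not Ready. A condensed client-go sketch of that check is below; it is not minikube's actual pod_ready.go, and the kubeconfig path, pod name, and poll interval are assumptions.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until its PodReady condition is True.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	// Path is an assumption for this sketch; minikube manages its own kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-proxy-v42lx", 4*time.Minute))
}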
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
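(When no apiserver process is found, the cycle above falls back to diagnostics: "crictl ps -a --quiet --name=<component>" for each control-plane component, all empty here because no containers exist yet, followed by journalctl for kubelet and CRI-O, dmesg, and "kubectl describe nodes". A condensed sketch of that fallback loop follows; it is illustrative only, not minikube's logs.go, and simply shells out to the same commands seen in the log.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs crictl reports for one component name.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil // matches the `found id: ""` / "0 containers" lines above
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		if len(listContainers(c)) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
	// With no containers to inspect, gather host-level logs instead.
	for _, cmd := range [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	} {
		out, _ := exec.Command("sudo", cmd[0], cmd[1:]...).CombinedOutput()
		_ = out // a real gatherer would store or print these logs
	}
}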
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
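While kubeadm waits here, the control-plane components are expected to come up as static pods created by the kubelet from the manifest directory named in the log. If this phase had to be inspected by hand on the node, the checks would look roughly like this (a sketch; container listing format depends on the runtime version):

    # static pod manifests written by kubeadm
    ls /etc/kubernetes/manifests/
    # containers the kubelet actually started for them (crio, via crictl)
    sudo crictl ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'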
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
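The warning above is kubeadm's own advice: the kubelet systemd unit is running but not enabled, so it would not start on reboot. The remedy is exactly the command the message names (shown only for completeness; minikube manages the service itself in these runs):

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet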
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
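The 457-byte file copied here is the bridge CNI configuration announced by the "Configuring bridge CNI" message above; its contents are not captured in the log. Inspecting it, and crio's view of whether the network is ready, would look like this on the node (a sketch):

    sudo cat /etc/cni/net.d/1-k8s.conflist
    # crio loads CNI configs from /etc/cni/net.d; its runtime status reports network readiness
    sudo crictl info | grep -i -A3 network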
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
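The repeated "kubectl get sa default" runs above are a poll: minikube treats the appearance of the "default" ServiceAccount as the signal that the control plane is serviceable before it finishes elevateKubeSystemPrivileges. Reproduced by hand with the same binary and kubeconfig paths from the log, the loop is essentially (a sketch):

    # wait until the default ServiceAccount exists, checking twice a second
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done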
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
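The three addons reported as enabled here can also be verified from the CLI once the profile is up (a sketch; the profile name comes from the log, and the metrics-server addon normally creates a Deployment of the same name in kube-system):

    minikube -p embed-certs-416634 addons list
    kubectl --context embed-certs-416634 -n kube-system get deploy metrics-server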
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
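The healthz probe above goes straight to the apiserver endpoint over HTTPS. The same check can be made through kubectl, which reuses the client certificates from the kubeconfig, or with curl against the address shown in the log (a sketch):

    kubectl --context embed-certs-416634 get --raw /healthz
    # or directly against the endpoint, skipping TLS verification:
    curl -k https://192.168.50.137:8443/healthz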
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
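The retries above are minikube waiting for the coredns (kube-dns) and kube-proxy pods to leave Pending; once they report Running, the k8s-apps wait completes. Watching the same transition interactively would be (a sketch):

    kubectl --context embed-certs-416634 -n kube-system get pods -w
    # or just the DNS pods, selected by their standard label
    kubectl --context embed-certs-416634 -n kube-system get pods -l k8s-app=kube-dns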
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
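The 4m0s timeout just above means the metrics-server pod named in the log never reported Ready, which is what pushes this profile into the full "kubeadm reset" path. To see why a pod like that stays unready, one would normally look at its status and events against the profile that owns it (a sketch; run with that profile's kubeconfig/context):

    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-6nb8p
    kubectl -n kube-system get events --field-selector involvedObject.name=metrics-server-57f55c9bc5-6nb8p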
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
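The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm init can regenerate it. A condensed sketch of the same check (illustrative only, not minikube's actual code):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"   # stale or missing; regenerated by the following kubeadm init
  fi
done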
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
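The 457-byte conflist copied above is the bridge CNI configuration; its exact contents are not shown in this log. A representative bridge/portmap conflist of the same shape (the values here are assumptions for illustration only) could be written like this:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
      "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF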
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
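The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: after creating the minikube-rbac cluster-admin binding, minikube polls until the "default" ServiceAccount exists. The loop is roughly equivalent to this sketch (same binary and kubeconfig paths as in the log):

until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # retry until the controller-manager has created the "default" ServiceAccount
done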
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
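The addon manifests are copied to /etc/kubernetes/addons on the node and applied with the cluster's own kubectl binary. The same addons can also be toggled per profile from the host; for example (profile name taken from this run):

out/minikube-linux-amd64 -p no-preload-477676 addons enable metrics-server
out/minikube-linux-amd64 -p no-preload-477676 addons list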
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
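These readiness waits check the Ready condition on every pod matching the system-critical labels listed above. A close user-level equivalent with kubectl (selectors copied from the log; the timeout is chosen for illustration):

kubectl --context no-preload-477676 -n kube-system wait pod \
  --for=condition=Ready -l k8s-app=kube-dns --timeout=6m
kubectl --context no-preload-477676 -n kube-system wait pod \
  --for=condition=Ready -l component=kube-apiserver --timeout=6m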
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
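The healthz wait above talks directly to the API server endpoint and expects a plain 200 "ok". Two manual equivalents, assuming the standard /var/lib/minikube/certs CA location and the kubeadm default of allowing anonymous access to /healthz:

curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.214:8443/healthz
kubectl --context no-preload-477676 get --raw /healthz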
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
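"Done!" means the kubeconfig written above now has its current context pointed at this profile, so plain kubectl commands from the host target the new cluster, for example:

kubectl config current-context      # should print no-preload-477676
kubectl get nodes -o wide
kubectl -n kube-system get pods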
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
	
	
	==> CRI-O <==
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.207333254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872329207300266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9539184a-48b5-4889-89c5-ae3c12a4386c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.207947696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5488ba5-e0ad-481b-a6e1-8cadd876e51d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.207994651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5488ba5-e0ad-481b-a6e1-8cadd876e51d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.208037075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c5488ba5-e0ad-481b-a6e1-8cadd876e51d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.243555892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ca9f556-7cc8-493a-9a62-ec12775a9204 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.243626470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ca9f556-7cc8-493a-9a62-ec12775a9204 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.245181106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d76219d8-76e6-47c1-aca7-fa1ef3b1491b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.245681087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872329245652384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d76219d8-76e6-47c1-aca7-fa1ef3b1491b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.246204198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c47e3b43-1038-4bd5-ac50-4e9c0db231be name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.246275459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c47e3b43-1038-4bd5-ac50-4e9c0db231be name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.246326937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c47e3b43-1038-4bd5-ac50-4e9c0db231be name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.281652491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e8a3682-97f9-42e7-8362-2763521f8e21 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.281720542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e8a3682-97f9-42e7-8362-2763521f8e21 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.283160983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afe4b5c0-1758-406b-bbe8-199828a15d64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.283625607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872329283599511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afe4b5c0-1758-406b-bbe8-199828a15d64 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.284221696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a60fdd70-95e4-4726-92ff-36e888003be3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.284272527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a60fdd70-95e4-4726-92ff-36e888003be3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.284307024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a60fdd70-95e4-4726-92ff-36e888003be3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.321175425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cfd5ad4-d386-4c9b-91fb-5fe60923de5f name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.321242174Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cfd5ad4-d386-4c9b-91fb-5fe60923de5f name=/runtime.v1.RuntimeService/Version
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.323801420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cfe3314-cfe2-47c3-9fba-f9ad5cfeec3c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.324268075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872329324246729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cfe3314-cfe2-47c3-9fba-f9ad5cfeec3c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.325277675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aeb65a3-9e73-4bf7-90d3-c32b7b012b29 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.325325334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aeb65a3-9e73-4bf7-90d3-c32b7b012b29 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:32:09 old-k8s-version-496808 crio[646]: time="2024-03-08 04:32:09.325367136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3aeb65a3-9e73-4bf7-90d3-c32b7b012b29 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar 8 04:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053945] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.875570] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.587428] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.467385] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.950443] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.070135] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073031] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.179936] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.161996] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.305208] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Mar 8 04:15] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.072099] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.055797] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.463903] kauditd_printk_skb: 46 callbacks suppressed
	[Mar 8 04:19] systemd-fstab-generator[5010]: Ignoring "noauto" option for root device
	[Mar 8 04:21] systemd-fstab-generator[5289]: Ignoring "noauto" option for root device
	[  +0.072080] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:32:09 up 17 min,  0 users,  load average: 0.00, 0.05, 0.07
	Linux old-k8s-version-496808 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b64c60, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000c1c000, 0x24, 0x0, ...)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: net.(*Dialer).DialContext(0xc0001cdf80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1c000, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000924260, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1c000, 0x24, 0x60, 0x7efd60421510, 0x118, ...)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: net/http.(*Transport).dial(0xc00024e780, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c1c000, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: net/http.(*Transport).dialConn(0xc00024e780, 0x4f7fe00, 0xc000120018, 0x0, 0xc000a47140, 0x5, 0xc000c1c000, 0x24, 0x0, 0xc000a3eb40, ...)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: net/http.(*Transport).dialConnFor(0xc00024e780, 0xc0009724d0)
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]: created by net/http.(*Transport).queueForDial
	Mar 08 04:32:06 old-k8s-version-496808 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 08 04:32:06 old-k8s-version-496808 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 08 04:32:06 old-k8s-version-496808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 08 04:32:07 old-k8s-version-496808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 08 04:32:07 old-k8s-version-496808 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 08 04:32:07 old-k8s-version-496808 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 08 04:32:07 old-k8s-version-496808 kubelet[6472]: I0308 04:32:07.239649    6472 server.go:416] Version: v1.20.0
	Mar 08 04:32:07 old-k8s-version-496808 kubelet[6472]: I0308 04:32:07.240015    6472 server.go:837] Client rotation is on, will bootstrap in background
	Mar 08 04:32:07 old-k8s-version-496808 kubelet[6472]: I0308 04:32:07.243264    6472 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 08 04:32:07 old-k8s-version-496808 kubelet[6472]: W0308 04:32:07.244827    6472 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 08 04:32:07 old-k8s-version-496808 kubelet[6472]: I0308 04:32:07.245245    6472 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (271.920223ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-496808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (501.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:36:34.172523066 +0000 UTC m=+6047.211432106
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-968261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.155µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-968261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-968261 logs -n 25
E0308 04:36:35.578781  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-968261 logs -n 25: (1.602827794s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/nsswitch.conf                                   |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/hosts                                           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/resolv.conf                                     |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo crictl                           | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | pods                                                 |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo crictl ps                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | --all                                                |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo find                             | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/cni -type f -exec sh -c                         |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo ip a s                           | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	| ssh     | -p auto-678320 sudo ip r s                           | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	| ssh     | -p auto-678320 sudo                                  | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo iptables                         | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo journalctl                       | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo docker                           | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo cat                              | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo                                  | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC | 08 Mar 24 04:36 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-678320 sudo systemctl                        | auto-678320 | jenkins | v1.32.0 | 08 Mar 24 04:36 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:36:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:36:21.200250  967335 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:36:21.200520  967335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:36:21.200531  967335 out.go:304] Setting ErrFile to fd 2...
	I0308 04:36:21.200536  967335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:36:21.200789  967335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:36:21.201453  967335 out.go:298] Setting JSON to false
	I0308 04:36:21.202653  967335 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29907,"bootTime":1709842674,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:36:21.202723  967335 start.go:139] virtualization: kvm guest
	I0308 04:36:21.204922  967335 out.go:177] * [calico-678320] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:36:21.206160  967335 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:36:21.207298  967335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:36:21.206264  967335 notify.go:220] Checking for updates...
	I0308 04:36:21.208498  967335 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:36:21.209778  967335 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:36:21.210917  967335 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:36:21.211984  967335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:36:21.213597  967335 config.go:182] Loaded profile config "auto-678320": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:36:21.213801  967335 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:36:21.213929  967335 config.go:182] Loaded profile config "kindnet-678320": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:36:21.214051  967335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:36:21.257428  967335 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:36:21.258688  967335 start.go:297] selected driver: kvm2
	I0308 04:36:21.258707  967335 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:36:21.258718  967335 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:36:21.259567  967335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:36:21.259638  967335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:36:21.277173  967335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:36:21.277230  967335 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 04:36:21.277557  967335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:36:21.277645  967335 cni.go:84] Creating CNI manager for "calico"
	I0308 04:36:21.277664  967335 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0308 04:36:21.277729  967335 start.go:340] cluster config:
	{Name:calico-678320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-678320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:36:21.277865  967335 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:36:21.279360  967335 out.go:177] * Starting "calico-678320" primary control-plane node in "calico-678320" cluster
	I0308 04:36:21.280479  967335 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:36:21.280527  967335 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 04:36:21.280538  967335 cache.go:56] Caching tarball of preloaded images
	I0308 04:36:21.280642  967335 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:36:21.280658  967335 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 04:36:21.280749  967335 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/calico-678320/config.json ...
	I0308 04:36:21.280772  967335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/calico-678320/config.json: {Name:mk5f4f3e6261a56e42254fb9e839bbaaeab0fa6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:36:21.280914  967335 start.go:360] acquireMachinesLock for calico-678320: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:36:21.280944  967335 start.go:364] duration metric: took 16.706µs to acquireMachinesLock for "calico-678320"
	I0308 04:36:21.280961  967335 start.go:93] Provisioning new machine with config: &{Name:calico-678320 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:calico-678320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:36:21.281028  967335 start.go:125] createHost starting for "" (driver="kvm2")
	I0308 04:36:21.282448  967335 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0308 04:36:21.282619  967335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:36:21.282668  967335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:36:21.301308  967335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0308 04:36:21.301745  967335 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:36:21.302422  967335 main.go:141] libmachine: Using API Version  1
	I0308 04:36:21.302448  967335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:36:21.302824  967335 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:36:21.303038  967335 main.go:141] libmachine: (calico-678320) Calling .GetMachineName
	I0308 04:36:21.303205  967335 main.go:141] libmachine: (calico-678320) Calling .DriverName
	I0308 04:36:21.303353  967335 start.go:159] libmachine.API.Create for "calico-678320" (driver="kvm2")
	I0308 04:36:21.303389  967335 client.go:168] LocalClient.Create starting
	I0308 04:36:21.303426  967335 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 04:36:21.303477  967335 main.go:141] libmachine: Decoding PEM data...
	I0308 04:36:21.303507  967335 main.go:141] libmachine: Parsing certificate...
	I0308 04:36:21.303584  967335 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 04:36:21.303605  967335 main.go:141] libmachine: Decoding PEM data...
	I0308 04:36:21.303618  967335 main.go:141] libmachine: Parsing certificate...
	I0308 04:36:21.303636  967335 main.go:141] libmachine: Running pre-create checks...
	I0308 04:36:21.303650  967335 main.go:141] libmachine: (calico-678320) Calling .PreCreateCheck
	I0308 04:36:21.303983  967335 main.go:141] libmachine: (calico-678320) Calling .GetConfigRaw
	I0308 04:36:21.304401  967335 main.go:141] libmachine: Creating machine...
	I0308 04:36:21.304418  967335 main.go:141] libmachine: (calico-678320) Calling .Create
	I0308 04:36:21.304520  967335 main.go:141] libmachine: (calico-678320) Creating KVM machine...
	I0308 04:36:21.305913  967335 main.go:141] libmachine: (calico-678320) DBG | found existing default KVM network
	I0308 04:36:21.307568  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:21.307417  967368 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000272110}
	I0308 04:36:21.307598  967335 main.go:141] libmachine: (calico-678320) DBG | created network xml: 
	I0308 04:36:21.307612  967335 main.go:141] libmachine: (calico-678320) DBG | <network>
	I0308 04:36:21.307621  967335 main.go:141] libmachine: (calico-678320) DBG |   <name>mk-calico-678320</name>
	I0308 04:36:21.307634  967335 main.go:141] libmachine: (calico-678320) DBG |   <dns enable='no'/>
	I0308 04:36:21.307644  967335 main.go:141] libmachine: (calico-678320) DBG |   
	I0308 04:36:21.307655  967335 main.go:141] libmachine: (calico-678320) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 04:36:21.307665  967335 main.go:141] libmachine: (calico-678320) DBG |     <dhcp>
	I0308 04:36:21.307698  967335 main.go:141] libmachine: (calico-678320) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 04:36:21.307722  967335 main.go:141] libmachine: (calico-678320) DBG |     </dhcp>
	I0308 04:36:21.307759  967335 main.go:141] libmachine: (calico-678320) DBG |   </ip>
	I0308 04:36:21.307785  967335 main.go:141] libmachine: (calico-678320) DBG |   
	I0308 04:36:21.307820  967335 main.go:141] libmachine: (calico-678320) DBG | </network>
	I0308 04:36:21.307838  967335 main.go:141] libmachine: (calico-678320) DBG | 
	I0308 04:36:21.312353  967335 main.go:141] libmachine: (calico-678320) DBG | trying to create private KVM network mk-calico-678320 192.168.39.0/24...
	I0308 04:36:21.395387  967335 main.go:141] libmachine: (calico-678320) DBG | private KVM network mk-calico-678320 192.168.39.0/24 created
	I0308 04:36:21.395420  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:21.395324  967368 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:36:21.395443  967335 main.go:141] libmachine: (calico-678320) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320 ...
	I0308 04:36:21.395469  967335 main.go:141] libmachine: (calico-678320) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 04:36:21.395605  967335 main.go:141] libmachine: (calico-678320) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 04:36:21.658976  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:21.658809  967368 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320/id_rsa...
	I0308 04:36:21.778447  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:21.778304  967368 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320/calico-678320.rawdisk...
	I0308 04:36:21.778480  967335 main.go:141] libmachine: (calico-678320) DBG | Writing magic tar header
	I0308 04:36:21.778495  967335 main.go:141] libmachine: (calico-678320) DBG | Writing SSH key tar header
	I0308 04:36:21.778512  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:21.778433  967368 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320 ...
	I0308 04:36:21.778551  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320
	I0308 04:36:21.778662  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320 (perms=drwx------)
	I0308 04:36:21.778681  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 04:36:21.778689  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 04:36:21.778698  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 04:36:21.778705  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 04:36:21.778716  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 04:36:21.778729  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:36:21.778739  967335 main.go:141] libmachine: (calico-678320) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 04:36:21.778754  967335 main.go:141] libmachine: (calico-678320) Creating domain...
	I0308 04:36:21.778765  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 04:36:21.778781  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 04:36:21.778788  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home/jenkins
	I0308 04:36:21.778795  967335 main.go:141] libmachine: (calico-678320) DBG | Checking permissions on dir: /home
	I0308 04:36:21.778803  967335 main.go:141] libmachine: (calico-678320) DBG | Skipping /home - not owner
	I0308 04:36:21.779896  967335 main.go:141] libmachine: (calico-678320) define libvirt domain using xml: 
	I0308 04:36:21.779917  967335 main.go:141] libmachine: (calico-678320) <domain type='kvm'>
	I0308 04:36:21.779927  967335 main.go:141] libmachine: (calico-678320)   <name>calico-678320</name>
	I0308 04:36:21.779939  967335 main.go:141] libmachine: (calico-678320)   <memory unit='MiB'>3072</memory>
	I0308 04:36:21.779948  967335 main.go:141] libmachine: (calico-678320)   <vcpu>2</vcpu>
	I0308 04:36:21.779957  967335 main.go:141] libmachine: (calico-678320)   <features>
	I0308 04:36:21.779969  967335 main.go:141] libmachine: (calico-678320)     <acpi/>
	I0308 04:36:21.779980  967335 main.go:141] libmachine: (calico-678320)     <apic/>
	I0308 04:36:21.779991  967335 main.go:141] libmachine: (calico-678320)     <pae/>
	I0308 04:36:21.780002  967335 main.go:141] libmachine: (calico-678320)     
	I0308 04:36:21.780015  967335 main.go:141] libmachine: (calico-678320)   </features>
	I0308 04:36:21.780028  967335 main.go:141] libmachine: (calico-678320)   <cpu mode='host-passthrough'>
	I0308 04:36:21.780040  967335 main.go:141] libmachine: (calico-678320)   
	I0308 04:36:21.780047  967335 main.go:141] libmachine: (calico-678320)   </cpu>
	I0308 04:36:21.780057  967335 main.go:141] libmachine: (calico-678320)   <os>
	I0308 04:36:21.780068  967335 main.go:141] libmachine: (calico-678320)     <type>hvm</type>
	I0308 04:36:21.780081  967335 main.go:141] libmachine: (calico-678320)     <boot dev='cdrom'/>
	I0308 04:36:21.780100  967335 main.go:141] libmachine: (calico-678320)     <boot dev='hd'/>
	I0308 04:36:21.780112  967335 main.go:141] libmachine: (calico-678320)     <bootmenu enable='no'/>
	I0308 04:36:21.780148  967335 main.go:141] libmachine: (calico-678320)   </os>
	I0308 04:36:21.780177  967335 main.go:141] libmachine: (calico-678320)   <devices>
	I0308 04:36:21.780188  967335 main.go:141] libmachine: (calico-678320)     <disk type='file' device='cdrom'>
	I0308 04:36:21.780204  967335 main.go:141] libmachine: (calico-678320)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320/boot2docker.iso'/>
	I0308 04:36:21.780217  967335 main.go:141] libmachine: (calico-678320)       <target dev='hdc' bus='scsi'/>
	I0308 04:36:21.780227  967335 main.go:141] libmachine: (calico-678320)       <readonly/>
	I0308 04:36:21.780254  967335 main.go:141] libmachine: (calico-678320)     </disk>
	I0308 04:36:21.780270  967335 main.go:141] libmachine: (calico-678320)     <disk type='file' device='disk'>
	I0308 04:36:21.780304  967335 main.go:141] libmachine: (calico-678320)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 04:36:21.780319  967335 main.go:141] libmachine: (calico-678320)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/calico-678320/calico-678320.rawdisk'/>
	I0308 04:36:21.780332  967335 main.go:141] libmachine: (calico-678320)       <target dev='hda' bus='virtio'/>
	I0308 04:36:21.780346  967335 main.go:141] libmachine: (calico-678320)     </disk>
	I0308 04:36:21.780358  967335 main.go:141] libmachine: (calico-678320)     <interface type='network'>
	I0308 04:36:21.780375  967335 main.go:141] libmachine: (calico-678320)       <source network='mk-calico-678320'/>
	I0308 04:36:21.780389  967335 main.go:141] libmachine: (calico-678320)       <model type='virtio'/>
	I0308 04:36:21.780397  967335 main.go:141] libmachine: (calico-678320)     </interface>
	I0308 04:36:21.780412  967335 main.go:141] libmachine: (calico-678320)     <interface type='network'>
	I0308 04:36:21.780432  967335 main.go:141] libmachine: (calico-678320)       <source network='default'/>
	I0308 04:36:21.780444  967335 main.go:141] libmachine: (calico-678320)       <model type='virtio'/>
	I0308 04:36:21.780454  967335 main.go:141] libmachine: (calico-678320)     </interface>
	I0308 04:36:21.780486  967335 main.go:141] libmachine: (calico-678320)     <serial type='pty'>
	I0308 04:36:21.780501  967335 main.go:141] libmachine: (calico-678320)       <target port='0'/>
	I0308 04:36:21.780507  967335 main.go:141] libmachine: (calico-678320)     </serial>
	I0308 04:36:21.780517  967335 main.go:141] libmachine: (calico-678320)     <console type='pty'>
	I0308 04:36:21.780524  967335 main.go:141] libmachine: (calico-678320)       <target type='serial' port='0'/>
	I0308 04:36:21.780537  967335 main.go:141] libmachine: (calico-678320)     </console>
	I0308 04:36:21.780545  967335 main.go:141] libmachine: (calico-678320)     <rng model='virtio'>
	I0308 04:36:21.780552  967335 main.go:141] libmachine: (calico-678320)       <backend model='random'>/dev/random</backend>
	I0308 04:36:21.780560  967335 main.go:141] libmachine: (calico-678320)     </rng>
	I0308 04:36:21.780567  967335 main.go:141] libmachine: (calico-678320)     
	I0308 04:36:21.780571  967335 main.go:141] libmachine: (calico-678320)     
	I0308 04:36:21.780579  967335 main.go:141] libmachine: (calico-678320)   </devices>
	I0308 04:36:21.780584  967335 main.go:141] libmachine: (calico-678320) </domain>
	I0308 04:36:21.780590  967335 main.go:141] libmachine: (calico-678320) 
	I0308 04:36:21.784599  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:cd:b6:28 in network default
	I0308 04:36:21.785138  967335 main.go:141] libmachine: (calico-678320) Ensuring networks are active...
	I0308 04:36:21.785154  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:21.785802  967335 main.go:141] libmachine: (calico-678320) Ensuring network default is active
	I0308 04:36:21.786111  967335 main.go:141] libmachine: (calico-678320) Ensuring network mk-calico-678320 is active
	I0308 04:36:21.786585  967335 main.go:141] libmachine: (calico-678320) Getting domain xml...
	I0308 04:36:21.787353  967335 main.go:141] libmachine: (calico-678320) Creating domain...
	I0308 04:36:23.118093  967335 main.go:141] libmachine: (calico-678320) Waiting to get IP...
	I0308 04:36:23.120328  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:23.120907  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:23.120940  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:23.120846  967368 retry.go:31] will retry after 259.524535ms: waiting for machine to come up
	I0308 04:36:23.382429  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:23.382970  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:23.382998  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:23.382905  967368 retry.go:31] will retry after 337.999086ms: waiting for machine to come up
	I0308 04:36:23.722464  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:23.723064  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:23.723097  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:23.723011  967368 retry.go:31] will retry after 440.405697ms: waiting for machine to come up
	I0308 04:36:24.164905  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:24.165622  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:24.165653  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:24.165564  967368 retry.go:31] will retry after 511.300541ms: waiting for machine to come up
	I0308 04:36:24.678341  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:24.678862  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:24.678893  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:24.678808  967368 retry.go:31] will retry after 595.061756ms: waiting for machine to come up
	I0308 04:36:25.275449  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:25.275980  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:25.276010  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:25.275941  967368 retry.go:31] will retry after 660.365322ms: waiting for machine to come up
	I0308 04:36:25.938534  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:25.939098  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:25.939142  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:25.939044  967368 retry.go:31] will retry after 1.028975628s: waiting for machine to come up
	I0308 04:36:26.969535  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:26.970194  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:26.970224  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:26.970149  967368 retry.go:31] will retry after 1.330576775s: waiting for machine to come up
	I0308 04:36:28.302483  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:28.303016  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:28.303043  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:28.302959  967368 retry.go:31] will retry after 1.328249669s: waiting for machine to come up
	I0308 04:36:29.632498  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:29.632986  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:29.633013  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:29.632956  967368 retry.go:31] will retry after 1.544111052s: waiting for machine to come up
	I0308 04:36:31.178527  967335 main.go:141] libmachine: (calico-678320) DBG | domain calico-678320 has defined MAC address 52:54:00:5f:46:41 in network mk-calico-678320
	I0308 04:36:31.179112  967335 main.go:141] libmachine: (calico-678320) DBG | unable to find current IP address of domain calico-678320 in network mk-calico-678320
	I0308 04:36:31.179142  967335 main.go:141] libmachine: (calico-678320) DBG | I0308 04:36:31.179051  967368 retry.go:31] will retry after 2.678708372s: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 08 04:36:34 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:34.998021298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872594997998134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec2e34d1-8797-4266-971f-baf504cf419f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:34 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:34.998707560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e118528-4118-444e-b285-2ca5940a2f67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:34 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:34.998871998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e118528-4118-444e-b285-2ca5940a2f67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:34 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:34.999080752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e118528-4118-444e-b285-2ca5940a2f67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.066651396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e537a30c-daec-4fc5-8b55-40407006b77e name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.066833104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e537a30c-daec-4fc5-8b55-40407006b77e name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.068371632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81c92895-bae5-4b8d-a53d-ee56bf3c8b84 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.069048517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872595069014333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81c92895-bae5-4b8d-a53d-ee56bf3c8b84 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.070051204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be02fcb7-cb3c-423d-8748-1994d0a33b6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.070170549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be02fcb7-cb3c-423d-8748-1994d0a33b6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.070525401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be02fcb7-cb3c-423d-8748-1994d0a33b6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.133193842Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5524fa7-bccc-417f-b3a1-09fcd5586fb8 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.133353015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5524fa7-bccc-417f-b3a1-09fcd5586fb8 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.136232288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fde2b53c-436f-47f7-bc41-1b05c7c50ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.136702114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872595136675778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fde2b53c-436f-47f7-bc41-1b05c7c50ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.137395831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=682112ab-7f72-42ff-852f-5dab0b71cf5b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.137516423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=682112ab-7f72-42ff-852f-5dab0b71cf5b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.137946918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=682112ab-7f72-42ff-852f-5dab0b71cf5b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.199994930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84571d23-223e-4888-a9ad-a33fdd52e5c5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.200131656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84571d23-223e-4888-a9ad-a33fdd52e5c5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.207623796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f303c09d-3c0b-4800-a793-927008dbc1de name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.208170222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872595208147211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f303c09d-3c0b-4800-a793-927008dbc1de name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.209891450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58536093-8202-476d-b542-28051b9be0a7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.210005549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58536093-8202-476d-b542-28051b9be0a7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:36:35 default-k8s-diff-port-968261 crio[690]: time="2024-03-08 04:36:35.210208110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871316102553862,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c405c5db907f4b0e271bf97bb0ffd76ca1fefbc096030a1aed5f4e67348317,PodSandboxId:7e040a2a27101ec4e1ecda9dfc6a14ee99f540d9b6895479b15a91d5c97776b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709871293807469441,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 285ff49b-6aad-46e0-b83e-1f5e7526dc8e,},Annotations:map[string]string{io.kubernetes.container.hash: f5cc11f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370,PodSandboxId:ed8798074e17f7e81e2e81dec6f68b45f595e5214317b534fb102d5bbf7b9b6c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871292800037463,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xqqds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 497e3ac1-3541-43bc-b138-1a47d7085161,},Annotations:map[string]string{io.kubernetes.container.hash: eb066e10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963,PodSandboxId:e452c978038656cfc7b70c00c0ec072da8e516a79969c4706b6430a354e74bf7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871285263551162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qpxcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ece55d5-ea
70-4be7-91c1-b1ac4fbf3def,},Annotations:map[string]string{io.kubernetes.container.hash: 580e3e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef,PodSandboxId:4a016392435c35938a8f9a0c6180cb9cffe5ed55085fb5a026606986e9d37ad8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709871285223103253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2af524-805e-4b03-b57d-
52e11b4c4344,},Annotations:map[string]string{io.kubernetes.container.hash: 32b612b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7,PodSandboxId:8285ae76ca75f8159bb56abe0ec25186c904057bc67ba22956b06086de1a72c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871280664144918,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ef4ffb142bb1b1a36cde477ee5eb2,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 66f65fb4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f,PodSandboxId:c930f5da151e516a5dd0e1d63d281a3d963a562d7794a50968449905c980ba14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871280590206377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126c4c950ddc2bdbc4332fd7a75ff39b,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6,PodSandboxId:3855b999baad207c092d964296e696a92f70af4d467fbaae1295ea2410dd648f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871280544395479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9594f3e9e7a9e0a04fc28f059d98
05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c,PodSandboxId:585df127d23405f172abb15bfc05736f766e5e9950750be1b00b80878895ff96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871280530135133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-968261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69920c8be13a5392621f56a25a5ab143
,},Annotations:map[string]string{io.kubernetes.container.hash: 1cf14b2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58536093-8202-476d-b542-28051b9be0a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c30a2f4827901       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   4a016392435c3       storage-provisioner
	55c405c5db907       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   7e040a2a27101       busybox
	8ce12798e302b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   ed8798074e17f       coredns-5dd5756b68-xqqds
	f153fe3d844da       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   e452c97803865       kube-proxy-qpxcp
	0db38a5fe1838       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   4a016392435c3       storage-provisioner
	811f83f4d25b2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   8285ae76ca75f       etcd-default-k8s-diff-port-968261
	c935f4cc994f0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   c930f5da151e5       kube-scheduler-default-k8s-diff-port-968261
	0f0b6de5c1ff3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   3855b999baad2       kube-controller-manager-default-k8s-diff-port-968261
	bd3188fde807f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   585df127d2340       kube-apiserver-default-k8s-diff-port-968261
	
	
	==> coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42078 - 3545 "HINFO IN 1257396824100369806.8679284982953496510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012063077s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-968261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-968261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=default-k8s-diff-port-968261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_07_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-968261
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:36:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:35:41 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:35:41 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:35:41 +0000   Fri, 08 Mar 2024 04:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:35:41 +0000   Fri, 08 Mar 2024 04:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.32
	  Hostname:    default-k8s-diff-port-968261
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 04a728da5b74434b8ff9a35ed8832efa
	  System UUID:                04a728da-5b74-434b-8ff9-a35ed8832efa
	  Boot ID:                    5fb53ae5-a4d4-41f2-af99-b9423669fb04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-5dd5756b68-xqqds                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-968261                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-968261             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-968261    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-qpxcp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-968261             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-ljb42                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-968261 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-968261 event: Registered Node default-k8s-diff-port-968261 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-968261 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-968261 event: Registered Node default-k8s-diff-port-968261 in Controller
	
	
	==> dmesg <==
	[Mar 8 04:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052790] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664603] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.441245] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.736359] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.721100] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.060226] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077478] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.226596] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.134423] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.300437] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.885782] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +0.072919] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.091429] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.598463] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.570151] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +3.178214] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.265605] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] <==
	{"level":"info","ts":"2024-03-08T04:14:43.056244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.056268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae received MsgVoteResp from 4611435f95b8c9ae at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.056295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4611435f95b8c9ae became leader at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.05632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4611435f95b8c9ae elected leader 4611435f95b8c9ae at term 3"}
	{"level":"info","ts":"2024-03-08T04:14:43.063125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4611435f95b8c9ae","local-member-attributes":"{Name:default-k8s-diff-port-968261 ClientURLs:[https://192.168.61.32:2379]}","request-path":"/0/members/4611435f95b8c9ae/attributes","cluster-id":"1806ce46318d79e6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:14:43.063198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:14:43.063415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:14:43.063456Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:14:43.063474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:14:43.064581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.32:2379"}
	{"level":"info","ts":"2024-03-08T04:14:43.064587Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:15:33.846127Z","caller":"traceutil/trace.go:171","msg":"trace[1060262001] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"178.072709ms","start":"2024-03-08T04:15:33.668011Z","end":"2024-03-08T04:15:33.846083Z","steps":["trace[1060262001] 'process raft request'  (duration: 101.909663ms)","trace[1060262001] 'compare'  (duration: 76.071746ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T04:24:43.090081Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":795}
	{"level":"info","ts":"2024-03-08T04:24:43.094167Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":795,"took":"3.211562ms","hash":1511359907}
	{"level":"info","ts":"2024-03-08T04:24:43.094308Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1511359907,"revision":795,"compact-revision":-1}
	{"level":"info","ts":"2024-03-08T04:29:43.098675Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1037}
	{"level":"info","ts":"2024-03-08T04:29:43.100498Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1037,"took":"1.364712ms","hash":3613362175}
	{"level":"info","ts":"2024-03-08T04:29:43.100566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3613362175,"revision":1037,"compact-revision":795}
	{"level":"info","ts":"2024-03-08T04:34:43.325388Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1280}
	{"level":"warn","ts":"2024-03-08T04:34:43.326432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.436126ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14532709299642441694 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:1280 > ","response":"size:5"}
	{"level":"info","ts":"2024-03-08T04:34:43.326636Z","caller":"traceutil/trace.go:171","msg":"trace[1095253250] compact","detail":"{revision:1280; response_revision:1523; }","duration":"201.982722ms","start":"2024-03-08T04:34:43.124619Z","end":"2024-03-08T04:34:43.326602Z","steps":["trace[1095253250] 'process raft request'  (duration: 65.680907ms)","trace[1095253250] 'check and update compact revision'  (duration: 134.301755ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T04:34:43.328188Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1280,"took":"2.119656ms","hash":2561148124}
	{"level":"info","ts":"2024-03-08T04:34:43.328271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2561148124,"revision":1280,"compact-revision":1037}
	{"level":"info","ts":"2024-03-08T04:35:41.506297Z","caller":"traceutil/trace.go:171","msg":"trace[570616241] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"113.773792ms","start":"2024-03-08T04:35:41.392408Z","end":"2024-03-08T04:35:41.506182Z","steps":["trace[570616241] 'process raft request'  (duration: 63.907738ms)","trace[570616241] 'compare'  (duration: 49.239832ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T04:36:05.982979Z","caller":"traceutil/trace.go:171","msg":"trace[1502696994] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"134.00185ms","start":"2024-03-08T04:36:05.848939Z","end":"2024-03-08T04:36:05.982941Z","steps":["trace[1502696994] 'process raft request'  (duration: 124.006467ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:36:35 up 22 min,  0 users,  load average: 0.32, 0.16, 0.11
	Linux default-k8s-diff-port-968261 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] <==
	I0308 04:32:45.500418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:32:45.501670       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:32:45.501695       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:32:45.501702       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:33:44.424095       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 04:34:44.424845       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:34:44.505011       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:44.505143       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:34:44.505579       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:34:45.505386       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:45.505510       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:34:45.505541       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:34:45.505449       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:45.505654       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:34:45.506621       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:35:44.425046       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:35:45.505868       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:35:45.505959       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:35:45.505969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:35:45.507166       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:35:45.507210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:35:45.507219       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] <==
	I0308 04:31:09.891991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="336.502µs"
	E0308 04:31:27.365020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:27.886251       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:31:57.372910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:57.894832       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:32:27.378590       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:27.904178       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:32:57.385022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:57.912840       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:27.391097       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:27.921463       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:57.397653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:57.929506       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:34:27.406168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:34:27.939725       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:34:57.411589       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:34:57.951471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:35:27.416295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:35:27.959563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:35:55.898818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="2.382253ms"
	E0308 04:35:57.421484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:35:57.967435       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:36:06.892108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="357.997µs"
	E0308 04:36:27.428105       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:36:27.977187       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] <==
	I0308 04:14:45.422184       1 server_others.go:69] "Using iptables proxy"
	I0308 04:14:45.444391       1 node.go:141] Successfully retrieved node IP: 192.168.61.32
	I0308 04:14:45.545119       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:14:45.545170       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:14:45.548035       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:14:45.548097       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:14:45.548230       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:14:45.548263       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:14:45.549447       1 config.go:188] "Starting service config controller"
	I0308 04:14:45.549494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:14:45.549514       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:14:45.549517       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:14:45.549956       1 config.go:315] "Starting node config controller"
	I0308 04:14:45.549992       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:14:45.650494       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:14:45.650546       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:14:45.650568       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] <==
	I0308 04:14:42.049093       1 serving.go:348] Generated self-signed cert in-memory
	I0308 04:14:44.551723       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 04:14:44.551934       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:14:44.559972       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 04:14:44.566085       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0308 04:14:44.566121       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0308 04:14:44.566147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 04:14:44.572574       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0308 04:14:44.577905       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0308 04:14:44.573392       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 04:14:44.577936       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:14:44.666950       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0308 04:14:44.678437       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 04:14:44.678535       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:34:10 default-k8s-diff-port-968261 kubelet[910]: E0308 04:34:10.872305     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:34:21 default-k8s-diff-port-968261 kubelet[910]: E0308 04:34:21.871747     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:34:36 default-k8s-diff-port-968261 kubelet[910]: E0308 04:34:36.871837     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:34:39 default-k8s-diff-port-968261 kubelet[910]: E0308 04:34:39.895578     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:34:39 default-k8s-diff-port-968261 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:34:39 default-k8s-diff-port-968261 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:34:39 default-k8s-diff-port-968261 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:34:39 default-k8s-diff-port-968261 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:34:47 default-k8s-diff-port-968261 kubelet[910]: E0308 04:34:47.872520     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:35:01 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:01.871515     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:35:14 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:14.872490     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:35:29 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:29.872550     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:35:39 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:39.895234     910 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:35:39 default-k8s-diff-port-968261 kubelet[910]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:35:39 default-k8s-diff-port-968261 kubelet[910]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:35:39 default-k8s-diff-port-968261 kubelet[910]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:35:39 default-k8s-diff-port-968261 kubelet[910]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:35:44 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:44.889058     910 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 08 04:35:44 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:44.889533     910 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 08 04:35:44 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:44.891274     910 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k88v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-ljb42_kube-system(94d8d406-0ea5-4ab7-86ef-e8284c83f810): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 08 04:35:44 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:44.891595     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:35:55 default-k8s-diff-port-968261 kubelet[910]: E0308 04:35:55.874105     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:36:06 default-k8s-diff-port-968261 kubelet[910]: E0308 04:36:06.873246     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:36:18 default-k8s-diff-port-968261 kubelet[910]: E0308 04:36:18.872398     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	Mar 08 04:36:32 default-k8s-diff-port-968261 kubelet[910]: E0308 04:36:32.872707     910 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ljb42" podUID="94d8d406-0ea5-4ab7-86ef-e8284c83f810"
	
	
	==> storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] <==
	I0308 04:14:45.354479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0308 04:15:15.360536       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] <==
	I0308 04:15:16.236670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:15:16.249201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:15:16.249284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:15:33.659038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:15:33.659228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25!
	I0308 04:15:33.659331       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"099f6927-da18-43cc-af2d-4f1a3cfff472", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25 became leader
	I0308 04:15:33.759915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-968261_e4968759-2460-4005-a070-ca4210c58f25!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ljb42
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42: exit status 1 (69.106067ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ljb42" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-968261 describe pod metrics-server-57f55c9bc5-ljb42: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (501.17s)
E0308 04:38:48.482964  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:55.490583  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
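Note (editorial, not harness output): the non-running pod reported above, metrics-server-57f55c9bc5-ljb42, is expected in this scenario. The Audit table further down shows the metrics-server addon being enabled with --registries=MetricsServer=fake.domain, so the image fake.domain/registry.k8s.io/echoserver:1.4 can never resolve and the pod stays in ImagePullBackOff, exactly as the kubelet log shows. For readers reproducing the "non-running pods" post-mortem step outside the harness, here is a minimal client-go sketch of the same field-selector query; the kubeconfig path is an assumption, the selector is the one the harness uses.

// Sketch only: the post-mortem step "get po -A --field-selector=status.phase!=Running"
// expressed with client-go. The kubeconfig path below is an assumed example value.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List pods across all namespaces whose phase is not Running, mirroring
	// the "non-running pods" post-mortem query above.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}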

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-416634 -n embed-certs-416634
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:34:35.988242845 +0000 UTC m=+5929.027151872
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-416634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-416634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.566µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-416634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
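Note (editorial, not harness output): this failure combines the two assertions visible above: the test first waits up to 9m0s for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then checks that the dashboard-metrics-scraper deployment references an image containing registry.k8s.io/echoserver:1.4. A minimal client-go sketch of both checks follows; it is not the harness code, and the kubeconfig path is an assumption.

// Sketch only: (1) poll up to 9m for labelled dashboard pods to be Running,
// (2) confirm the dashboard-metrics-scraper deployment uses the expected image.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "kubernetes-dashboard"

	// (1) Wait until at least one matching pod exists and all of them are Running.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		fmt.Println("dashboard pods never became Running:", err)
	}

	// (2) Check the scraper deployment's container images for the expected substring.
	dep, err := cs.AppsV1().Deployments(ns).Get(context.Background(),
		"dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		fmt.Println(c.Image, "contains expected image:",
			strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4"))
	}
}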
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-416634 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-416634 logs -n 25: (1.374759013s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC | 08 Mar 24 04:34 UTC |
	| start   | -p newest-cni-525359 --memory=2200 --alsologtostderr   | newest-cni-525359            | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC | 08 Mar 24 04:34 UTC |
	| start   | -p auto-678320 --memory=3072                           | auto-678320                  | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:34:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:34:10.970356  965524 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:34:10.970540  965524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:34:10.970568  965524 out.go:304] Setting ErrFile to fd 2...
	I0308 04:34:10.970583  965524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:34:10.971096  965524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:34:10.972214  965524 out.go:298] Setting JSON to false
	I0308 04:34:10.973583  965524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29777,"bootTime":1709842674,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:34:10.973650  965524 start.go:139] virtualization: kvm guest
	I0308 04:34:10.975665  965524 out.go:177] * [auto-678320] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:34:10.976958  965524 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:34:10.978248  965524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:34:10.977011  965524 notify.go:220] Checking for updates...
	I0308 04:34:10.980861  965524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:34:10.982626  965524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:34:10.983971  965524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:34:10.985377  965524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:34:10.987141  965524 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:34:10.987238  965524 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:34:10.987323  965524 config.go:182] Loaded profile config "newest-cni-525359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:34:10.987412  965524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:34:11.026866  965524 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:34:11.028173  965524 start.go:297] selected driver: kvm2
	I0308 04:34:11.028188  965524 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:34:11.028199  965524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:34:11.028905  965524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:34:11.028987  965524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:34:11.045372  965524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:34:11.045428  965524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 04:34:11.045692  965524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:34:11.045789  965524 cni.go:84] Creating CNI manager for ""
	I0308 04:34:11.045809  965524 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:34:11.045822  965524 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 04:34:11.045906  965524 start.go:340] cluster config:
	{Name:auto-678320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-678320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:34:11.046051  965524 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:34:11.047849  965524 out.go:177] * Starting "auto-678320" primary control-plane node in "auto-678320" cluster
	I0308 04:34:07.957809  965254 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 04:34:07.957941  965254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:34:07.957980  965254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:34:07.982743  965254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0308 04:34:07.984357  965254 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:34:07.985024  965254 main.go:141] libmachine: Using API Version  1
	I0308 04:34:07.985046  965254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:34:07.985597  965254 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:34:07.985791  965254 main.go:141] libmachine: (newest-cni-525359) Calling .GetMachineName
	I0308 04:34:07.985974  965254 main.go:141] libmachine: (newest-cni-525359) Calling .DriverName
	I0308 04:34:07.986140  965254 start.go:159] libmachine.API.Create for "newest-cni-525359" (driver="kvm2")
	I0308 04:34:07.986195  965254 client.go:168] LocalClient.Create starting
	I0308 04:34:07.986230  965254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem
	I0308 04:34:07.986262  965254 main.go:141] libmachine: Decoding PEM data...
	I0308 04:34:07.986276  965254 main.go:141] libmachine: Parsing certificate...
	I0308 04:34:07.986324  965254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem
	I0308 04:34:07.986347  965254 main.go:141] libmachine: Decoding PEM data...
	I0308 04:34:07.986357  965254 main.go:141] libmachine: Parsing certificate...
	I0308 04:34:07.986372  965254 main.go:141] libmachine: Running pre-create checks...
	I0308 04:34:07.986379  965254 main.go:141] libmachine: (newest-cni-525359) Calling .PreCreateCheck
	I0308 04:34:07.986788  965254 main.go:141] libmachine: (newest-cni-525359) Calling .GetConfigRaw
	I0308 04:34:07.987213  965254 main.go:141] libmachine: Creating machine...
	I0308 04:34:07.987228  965254 main.go:141] libmachine: (newest-cni-525359) Calling .Create
	I0308 04:34:07.987378  965254 main.go:141] libmachine: (newest-cni-525359) Creating KVM machine...
	I0308 04:34:07.988848  965254 main.go:141] libmachine: (newest-cni-525359) DBG | found existing default KVM network
	I0308 04:34:07.991124  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:07.990928  965300 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002720e0}
	I0308 04:34:07.991199  965254 main.go:141] libmachine: (newest-cni-525359) DBG | created network xml: 
	I0308 04:34:07.991222  965254 main.go:141] libmachine: (newest-cni-525359) DBG | <network>
	I0308 04:34:07.991236  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   <name>mk-newest-cni-525359</name>
	I0308 04:34:07.991251  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   <dns enable='no'/>
	I0308 04:34:07.991274  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   
	I0308 04:34:07.991294  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0308 04:34:07.991309  965254 main.go:141] libmachine: (newest-cni-525359) DBG |     <dhcp>
	I0308 04:34:07.991324  965254 main.go:141] libmachine: (newest-cni-525359) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0308 04:34:07.991345  965254 main.go:141] libmachine: (newest-cni-525359) DBG |     </dhcp>
	I0308 04:34:07.991356  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   </ip>
	I0308 04:34:07.991365  965254 main.go:141] libmachine: (newest-cni-525359) DBG |   
	I0308 04:34:07.991371  965254 main.go:141] libmachine: (newest-cni-525359) DBG | </network>
	I0308 04:34:07.991381  965254 main.go:141] libmachine: (newest-cni-525359) DBG | 
	I0308 04:34:07.996298  965254 main.go:141] libmachine: (newest-cni-525359) DBG | trying to create private KVM network mk-newest-cni-525359 192.168.39.0/24...
	I0308 04:34:08.076387  965254 main.go:141] libmachine: (newest-cni-525359) DBG | private KVM network mk-newest-cni-525359 192.168.39.0/24 created
	I0308 04:34:08.076420  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:08.076349  965300 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:34:08.076439  965254 main.go:141] libmachine: (newest-cni-525359) Setting up store path in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359 ...
	I0308 04:34:08.076455  965254 main.go:141] libmachine: (newest-cni-525359) Building disk image from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 04:34:08.076691  965254 main.go:141] libmachine: (newest-cni-525359) Downloading /home/jenkins/minikube-integration/18333-911675/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 04:34:08.342872  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:08.342750  965300 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/id_rsa...
	I0308 04:34:08.547712  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:08.547585  965300 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/newest-cni-525359.rawdisk...
	I0308 04:34:08.547751  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Writing magic tar header
	I0308 04:34:08.547770  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Writing SSH key tar header
	I0308 04:34:08.547985  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:08.547884  965300 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359 ...
	I0308 04:34:08.548084  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359
	I0308 04:34:08.548132  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube/machines
	I0308 04:34:08.548147  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359 (perms=drwx------)
	I0308 04:34:08.548175  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube/machines (perms=drwxr-xr-x)
	I0308 04:34:08.548187  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675/.minikube (perms=drwxr-xr-x)
	I0308 04:34:08.548226  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins/minikube-integration/18333-911675 (perms=drwxrwxr-x)
	I0308 04:34:08.548245  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:34:08.548253  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0308 04:34:08.548268  965254 main.go:141] libmachine: (newest-cni-525359) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0308 04:34:08.548276  965254 main.go:141] libmachine: (newest-cni-525359) Creating domain...
	I0308 04:34:08.548285  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18333-911675
	I0308 04:34:08.548312  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0308 04:34:08.548331  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home/jenkins
	I0308 04:34:08.548351  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Checking permissions on dir: /home
	I0308 04:34:08.548359  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Skipping /home - not owner
	I0308 04:34:08.549657  965254 main.go:141] libmachine: (newest-cni-525359) define libvirt domain using xml: 
	I0308 04:34:08.549675  965254 main.go:141] libmachine: (newest-cni-525359) <domain type='kvm'>
	I0308 04:34:08.549686  965254 main.go:141] libmachine: (newest-cni-525359)   <name>newest-cni-525359</name>
	I0308 04:34:08.549694  965254 main.go:141] libmachine: (newest-cni-525359)   <memory unit='MiB'>2200</memory>
	I0308 04:34:08.549702  965254 main.go:141] libmachine: (newest-cni-525359)   <vcpu>2</vcpu>
	I0308 04:34:08.549710  965254 main.go:141] libmachine: (newest-cni-525359)   <features>
	I0308 04:34:08.549717  965254 main.go:141] libmachine: (newest-cni-525359)     <acpi/>
	I0308 04:34:08.549723  965254 main.go:141] libmachine: (newest-cni-525359)     <apic/>
	I0308 04:34:08.549731  965254 main.go:141] libmachine: (newest-cni-525359)     <pae/>
	I0308 04:34:08.549741  965254 main.go:141] libmachine: (newest-cni-525359)     
	I0308 04:34:08.549749  965254 main.go:141] libmachine: (newest-cni-525359)   </features>
	I0308 04:34:08.549755  965254 main.go:141] libmachine: (newest-cni-525359)   <cpu mode='host-passthrough'>
	I0308 04:34:08.549764  965254 main.go:141] libmachine: (newest-cni-525359)   
	I0308 04:34:08.549770  965254 main.go:141] libmachine: (newest-cni-525359)   </cpu>
	I0308 04:34:08.549778  965254 main.go:141] libmachine: (newest-cni-525359)   <os>
	I0308 04:34:08.549785  965254 main.go:141] libmachine: (newest-cni-525359)     <type>hvm</type>
	I0308 04:34:08.549794  965254 main.go:141] libmachine: (newest-cni-525359)     <boot dev='cdrom'/>
	I0308 04:34:08.549800  965254 main.go:141] libmachine: (newest-cni-525359)     <boot dev='hd'/>
	I0308 04:34:08.549809  965254 main.go:141] libmachine: (newest-cni-525359)     <bootmenu enable='no'/>
	I0308 04:34:08.549815  965254 main.go:141] libmachine: (newest-cni-525359)   </os>
	I0308 04:34:08.549822  965254 main.go:141] libmachine: (newest-cni-525359)   <devices>
	I0308 04:34:08.549830  965254 main.go:141] libmachine: (newest-cni-525359)     <disk type='file' device='cdrom'>
	I0308 04:34:08.549842  965254 main.go:141] libmachine: (newest-cni-525359)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/boot2docker.iso'/>
	I0308 04:34:08.549854  965254 main.go:141] libmachine: (newest-cni-525359)       <target dev='hdc' bus='scsi'/>
	I0308 04:34:08.549862  965254 main.go:141] libmachine: (newest-cni-525359)       <readonly/>
	I0308 04:34:08.549867  965254 main.go:141] libmachine: (newest-cni-525359)     </disk>
	I0308 04:34:08.549901  965254 main.go:141] libmachine: (newest-cni-525359)     <disk type='file' device='disk'>
	I0308 04:34:08.549927  965254 main.go:141] libmachine: (newest-cni-525359)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0308 04:34:08.549946  965254 main.go:141] libmachine: (newest-cni-525359)       <source file='/home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/newest-cni-525359.rawdisk'/>
	I0308 04:34:08.549959  965254 main.go:141] libmachine: (newest-cni-525359)       <target dev='hda' bus='virtio'/>
	I0308 04:34:08.549968  965254 main.go:141] libmachine: (newest-cni-525359)     </disk>
	I0308 04:34:08.549980  965254 main.go:141] libmachine: (newest-cni-525359)     <interface type='network'>
	I0308 04:34:08.549992  965254 main.go:141] libmachine: (newest-cni-525359)       <source network='mk-newest-cni-525359'/>
	I0308 04:34:08.550003  965254 main.go:141] libmachine: (newest-cni-525359)       <model type='virtio'/>
	I0308 04:34:08.550011  965254 main.go:141] libmachine: (newest-cni-525359)     </interface>
	I0308 04:34:08.550027  965254 main.go:141] libmachine: (newest-cni-525359)     <interface type='network'>
	I0308 04:34:08.550037  965254 main.go:141] libmachine: (newest-cni-525359)       <source network='default'/>
	I0308 04:34:08.550043  965254 main.go:141] libmachine: (newest-cni-525359)       <model type='virtio'/>
	I0308 04:34:08.550052  965254 main.go:141] libmachine: (newest-cni-525359)     </interface>
	I0308 04:34:08.550061  965254 main.go:141] libmachine: (newest-cni-525359)     <serial type='pty'>
	I0308 04:34:08.550070  965254 main.go:141] libmachine: (newest-cni-525359)       <target port='0'/>
	I0308 04:34:08.550080  965254 main.go:141] libmachine: (newest-cni-525359)     </serial>
	I0308 04:34:08.550111  965254 main.go:141] libmachine: (newest-cni-525359)     <console type='pty'>
	I0308 04:34:08.550131  965254 main.go:141] libmachine: (newest-cni-525359)       <target type='serial' port='0'/>
	I0308 04:34:08.550143  965254 main.go:141] libmachine: (newest-cni-525359)     </console>
	I0308 04:34:08.550163  965254 main.go:141] libmachine: (newest-cni-525359)     <rng model='virtio'>
	I0308 04:34:08.550177  965254 main.go:141] libmachine: (newest-cni-525359)       <backend model='random'>/dev/random</backend>
	I0308 04:34:08.550186  965254 main.go:141] libmachine: (newest-cni-525359)     </rng>
	I0308 04:34:08.550193  965254 main.go:141] libmachine: (newest-cni-525359)     
	I0308 04:34:08.550204  965254 main.go:141] libmachine: (newest-cni-525359)     
	I0308 04:34:08.550216  965254 main.go:141] libmachine: (newest-cni-525359)   </devices>
	I0308 04:34:08.550222  965254 main.go:141] libmachine: (newest-cni-525359) </domain>
	I0308 04:34:08.550236  965254 main.go:141] libmachine: (newest-cni-525359) 
	I0308 04:34:08.555338  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:7f:41:31 in network default
	I0308 04:34:08.556118  965254 main.go:141] libmachine: (newest-cni-525359) Ensuring networks are active...
	I0308 04:34:08.556143  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:08.556877  965254 main.go:141] libmachine: (newest-cni-525359) Ensuring network default is active
	I0308 04:34:08.557325  965254 main.go:141] libmachine: (newest-cni-525359) Ensuring network mk-newest-cni-525359 is active
	I0308 04:34:08.558024  965254 main.go:141] libmachine: (newest-cni-525359) Getting domain xml...
	I0308 04:34:08.558781  965254 main.go:141] libmachine: (newest-cni-525359) Creating domain...
	I0308 04:34:09.928561  965254 main.go:141] libmachine: (newest-cni-525359) Waiting to get IP...
	I0308 04:34:09.929284  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:09.929772  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:09.929841  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:09.929762  965300 retry.go:31] will retry after 236.625554ms: waiting for machine to come up
	I0308 04:34:10.642902  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:10.643460  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:10.643505  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:10.643427  965300 retry.go:31] will retry after 285.82519ms: waiting for machine to come up
	I0308 04:34:10.931109  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:10.931617  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:10.931664  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:10.931584  965300 retry.go:31] will retry after 480.540595ms: waiting for machine to come up
	I0308 04:34:11.414106  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:11.414568  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:11.414598  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:11.414539  965300 retry.go:31] will retry after 397.459011ms: waiting for machine to come up
	I0308 04:34:11.813245  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:11.813792  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:11.813844  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:11.813750  965300 retry.go:31] will retry after 479.695526ms: waiting for machine to come up
	I0308 04:34:12.295604  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:12.296188  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:12.296224  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:12.296131  965300 retry.go:31] will retry after 740.699858ms: waiting for machine to come up
	I0308 04:34:11.049168  965524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:34:11.049216  965524 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0308 04:34:11.049230  965524 cache.go:56] Caching tarball of preloaded images
	I0308 04:34:11.049391  965524 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:34:11.049407  965524 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0308 04:34:11.049531  965524 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/auto-678320/config.json ...
	I0308 04:34:11.049558  965524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/auto-678320/config.json: {Name:mkedc38d1f83d662832c6cd27a3430159e9c6aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:34:11.049747  965524 start.go:360] acquireMachinesLock for auto-678320: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:34:13.038644  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:13.039177  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:13.039226  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:13.039127  965300 retry.go:31] will retry after 1.045335426s: waiting for machine to come up
	I0308 04:34:14.086554  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:14.087028  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:14.087061  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:14.086965  965300 retry.go:31] will retry after 1.38271929s: waiting for machine to come up
	I0308 04:34:15.471653  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:15.472125  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:15.472155  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:15.472067  965300 retry.go:31] will retry after 1.639985208s: waiting for machine to come up
	I0308 04:34:17.113493  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:17.114064  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:17.114094  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:17.114007  965300 retry.go:31] will retry after 1.998301708s: waiting for machine to come up
	I0308 04:34:19.113685  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:19.114164  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:19.114193  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:19.114105  965300 retry.go:31] will retry after 2.50483445s: waiting for machine to come up
	I0308 04:34:21.621916  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:21.622474  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:21.622505  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:21.622400  965300 retry.go:31] will retry after 3.068159464s: waiting for machine to come up
	I0308 04:34:24.692986  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:24.693480  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:24.693504  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:24.693434  965300 retry.go:31] will retry after 3.331751025s: waiting for machine to come up
	I0308 04:34:28.026531  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:28.026971  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find current IP address of domain newest-cni-525359 in network mk-newest-cni-525359
	I0308 04:34:28.026993  965254 main.go:141] libmachine: (newest-cni-525359) DBG | I0308 04:34:28.026928  965300 retry.go:31] will retry after 4.564911338s: waiting for machine to come up
	I0308 04:34:32.596705  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:32.597180  965254 main.go:141] libmachine: (newest-cni-525359) Found IP for machine: 192.168.39.126
	I0308 04:34:32.597206  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has current primary IP address 192.168.39.126 and MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:32.597213  965254 main.go:141] libmachine: (newest-cni-525359) Reserving static IP address...
	I0308 04:34:32.597545  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find host DHCP lease matching {name: "newest-cni-525359", mac: "52:54:00:b9:38:a5", ip: "192.168.39.126"} in network mk-newest-cni-525359
	I0308 04:34:32.674442  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Getting to WaitForSSH function...
	I0308 04:34:32.674477  965254 main.go:141] libmachine: (newest-cni-525359) Reserved static IP address: 192.168.39.126
	I0308 04:34:32.674526  965254 main.go:141] libmachine: (newest-cni-525359) Waiting for SSH to be available...
	I0308 04:34:32.677641  965254 main.go:141] libmachine: (newest-cni-525359) DBG | domain newest-cni-525359 has defined MAC address 52:54:00:b9:38:a5 in network mk-newest-cni-525359
	I0308 04:34:32.678032  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b9:38:a5", ip: ""} in network mk-newest-cni-525359
	I0308 04:34:32.678059  965254 main.go:141] libmachine: (newest-cni-525359) DBG | unable to find defined IP address of network mk-newest-cni-525359 interface with MAC address 52:54:00:b9:38:a5
	I0308 04:34:32.678274  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Using SSH client type: external
	I0308 04:34:32.678305  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/id_rsa (-rw-------)
	I0308 04:34:32.678340  965254 main.go:141] libmachine: (newest-cni-525359) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/newest-cni-525359/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:34:32.678360  965254 main.go:141] libmachine: (newest-cni-525359) DBG | About to run SSH command:
	I0308 04:34:32.678411  965254 main.go:141] libmachine: (newest-cni-525359) DBG | exit 0
	I0308 04:34:32.682284  965254 main.go:141] libmachine: (newest-cni-525359) DBG | SSH cmd err, output: exit status 255: 
	I0308 04:34:32.682304  965254 main.go:141] libmachine: (newest-cni-525359) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0308 04:34:32.682324  965254 main.go:141] libmachine: (newest-cni-525359) DBG | command : exit 0
	I0308 04:34:32.682329  965254 main.go:141] libmachine: (newest-cni-525359) DBG | err     : exit status 255
	I0308 04:34:32.682336  965254 main.go:141] libmachine: (newest-cni-525359) DBG | output  : 
	
	
	==> CRI-O <==
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.709264162Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdb6401a-89de-4979-a4dd-590fbbd44341 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.710546688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0424813a-b605-4744-8908-9589c9fa05da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.711888773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872476711865663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0424813a-b605-4744-8908-9589c9fa05da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.712569793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dc8a80f-7d44-4efb-b89c-4b834b8732f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.712617868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dc8a80f-7d44-4efb-b89c-4b834b8732f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.712809890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dc8a80f-7d44-4efb-b89c-4b834b8732f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.737774627Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed858a1c-b013-40de-9c55-b2288996637c name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.737963100Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&PodSandboxMetadata{Name:kube-proxy-vc6p9,Uid:8b6e5755-2084-40ef-a128-1f4e04bf1ea6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871573086597655,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:19:30.969613787Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70558970b03d12ca391e7b85bcf7614328c47ebcea9e43e0b3c2e5c05ccb7aa0,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-kh9vr,Uid:eb205c10-4b89-499f-8cda-ad
ae031e374b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871572894250547,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-kh9vr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb205c10-4b89-499f-8cda-adae031e374b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:19:32.567152268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-t8z94,Uid:6f3d1519-9094-478a-80c5-a9fd11214336,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871572789102291,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,k8s-app: kube-dns,pod-template-hash: 5dd575
6b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:19:31.280793262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8b824332-34d7-477f-9db5-62d7fca45586,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871572779225399,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b824332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spe
c\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T04:19:32.470189064Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-h7p5l,Uid:72be5a70-ece6-4511-bef6-20fe746db41f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871572545290089,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db41f,k8s-app: kube-dns,pod-templa
te-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:19:31.334812312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-416634,Uid:92f4327b0cc2b6df0103b9e3f5c54e8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871552215689823,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 92f4327b0cc2b6df0103b9e3f5c54e8c,kubernetes.io/config.seen: 2024-03-08T04:19:11.716054089Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-416634,Uid:4cce26e170a4eb6ab13655e1514ded64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871552209892572,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4cce26e170a4eb6ab13655e1514ded64,kubernetes.io/config.seen: 2024-03-08T04:19:11.716053159Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-416634,Uid:d3e1adaf08926008c4ecd7a05a055794,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871552175623307,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.137:8443,kubernetes.io/config.hash: d3e1adaf08926008c4ecd7a05a055794,kubernetes.io/config.seen: 2024-03-08T04:19:11.716051967Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-416634,Uid:8ac5c089879fecf5f99d1bde5e04423f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871552171455815,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.137:2379,kubernetes.io/config.hash: 8ac5c089879fecf5f99d1bde5e04423f,kubernetes.io/config.seen: 2024-03-08T04:19:11.716047887Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ed858a1c-b013-40de-9c55-b2288996637c name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.738664698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=011b2fcd-69c1-42ab-8953-bacdf6c48491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.738716835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=011b2fcd-69c1-42ab-8953-bacdf6c48491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.739006013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=011b2fcd-69c1-42ab-8953-bacdf6c48491 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.755396163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b00e82c4-80d3-448a-bc78-2a61b0f486e5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.755450738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b00e82c4-80d3-448a-bc78-2a61b0f486e5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.756620821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de51cac7-4643-4e72-923c-91182e9d45c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.757141734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872476757122107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de51cac7-4643-4e72-923c-91182e9d45c1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.757838399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4554c8a-ecf0-4d64-96ef-2caffb07121a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.757883592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4554c8a-ecf0-4d64-96ef-2caffb07121a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.758226441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4554c8a-ecf0-4d64-96ef-2caffb07121a name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.807575024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17c4be26-128a-4f93-a1a2-2f3c4e2fdd6e name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.807646913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17c4be26-128a-4f93-a1a2-2f3c4e2fdd6e name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.809029951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b70f253f-a888-4036-8354-747142d882d4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.809540766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872476809512928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b70f253f-a888-4036-8354-747142d882d4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.810314904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=534dc0a3-902c-43c4-9f5d-59b05ef29e61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.810447693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=534dc0a3-902c-43c4-9f5d-59b05ef29e61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:36 embed-certs-416634 crio[696]: time="2024-03-08 04:34:36.810636362Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa,PodSandboxId:72cb54c01e4b80dc7eb3d90339c9db937c989cdc65220fbf464ca781ff78ef5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709871573494009975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vc6p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b6e5755-2084-40ef-a128-1f4e04bf1ea6,},Annotations:map[string]string{io.kubernetes.container.hash: e28c71c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3,PodSandboxId:d5ef238b507a97bccac1dd432066e01add5920f6b454a1913cc818317a8f52c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871573313596301,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8z94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f3d1519-9094-478a-80c5-a9fd11214336,},Annotations:map[string]string{io.kubernetes.container.hash: 7bb96b78,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f,PodSandboxId:0ef7e29efb1fc02414210c48a305df407460e87f87e36d29764dbfd065173104,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871573030278569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b8
24332-34d7-477f-9db5-62d7fca45586,},Annotations:map[string]string{io.kubernetes.container.hash: 297a7b6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710,PodSandboxId:a49b661206f86d961c19ba65f81b129b8d3ed5bac17d85077bbafdd4e3a6d9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709871572917168486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h7p5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72be5a70-ece6-4511-bef6-20fe746db4
1f,},Annotations:map[string]string{io.kubernetes.container.hash: fe4c0c00,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f,PodSandboxId:db04f4bffeb9ff437f429b82b23c974c08d2be52f005e63be2e584708bbaacc0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709871552537475211,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92f4327b0cc2b6df0103b9e3f5c54e8c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e,PodSandboxId:c5e1758c71ac9788841c34b788b1fcb2196f8c7ece6a6d510ce8b95aa81be129,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709871552474438825,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac5c089879fecf5f99d1bde5e04423f,},Annotations:map[string]string{io.kubernetes.container.hash: 2ec1d652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa,PodSandboxId:2850b1ddd7fe2ec62dcc4c8f0ded97af578a8adb23dd2fdc5f3a50a8d2a27b30,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709871552428432750,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cce26e170a4eb6ab13655e1514ded64,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409,PodSandboxId:2329d7c360fee2cade43351ea4135b1aeb6516c054b6a1c3d4092623f2736f6c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709871552350761873,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-416634,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e1adaf08926008c4ecd7a05a055794,},Annotations:map[string]string{io.kubernetes.container.hash: 59d577da,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=534dc0a3-902c-43c4-9f5d-59b05ef29e61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	069e0e7141e5c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   72cb54c01e4b8       kube-proxy-vc6p9
	22cf1eb102eca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   d5ef238b507a9       coredns-5dd5756b68-t8z94
	58a3351a84ed3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   0ef7e29efb1fc       storage-provisioner
	700108aa484b7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   a49b661206f86       coredns-5dd5756b68-h7p5l
	a19746274b80b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   db04f4bffeb9f       kube-scheduler-embed-certs-416634
	8b723d6ce5e40       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   c5e1758c71ac9       etcd-embed-certs-416634
	914f5c7bd0bf4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   2850b1ddd7fe2       kube-controller-manager-embed-certs-416634
	3796ec2c42925       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   2329d7c360fee       kube-apiserver-embed-certs-416634
	
	
	==> coredns [22cf1eb102eca18ad4a7d0e7db64d87e3ae78721c809425de1a82ada6d0d57d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [700108aa484b77a14528b21fea70464059944e1bce5398f0c7d2e21d23f72710] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-416634
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-416634
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=embed-certs-416634
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:19:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-416634
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:34:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:29:51 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:29:51 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:29:51 +0000   Fri, 08 Mar 2024 04:19:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:29:51 +0000   Fri, 08 Mar 2024 04:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.137
	  Hostname:    embed-certs-416634
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d07fdaff76b0452ea252cb050c19ef00
	  System UUID:                d07fdaff-76b0-452e-a252-cb050c19ef00
	  Boot ID:                    d48cc684-c130-4fc6-94f4-ef7b78e4b404
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-h7p5l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-t8z94                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-416634                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-416634             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-416634    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-vc6p9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-416634             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-kh9vr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-416634 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-416634 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-416634 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-416634 event: Registered Node embed-certs-416634 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054483] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.553813] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar 8 04:14] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.729338] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.485164] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.056355] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066121] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.191730] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.137487] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.308272] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +5.649360] systemd-fstab-generator[779]: Ignoring "noauto" option for root device
	[  +0.062892] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.974997] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +5.629923] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.799238] kauditd_printk_skb: 74 callbacks suppressed
	[Mar 8 04:19] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	[  +4.739572] kauditd_printk_skb: 59 callbacks suppressed
	[  +2.567409] systemd-fstab-generator[3700]: Ignoring "noauto" option for root device
	[ +12.369009] systemd-fstab-generator[3889]: Ignoring "noauto" option for root device
	[  +0.112620] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 8 04:20] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [8b723d6ce5e40c8ac5058511d177f548290c130d265a3142e584506ee377364e] <==
	{"level":"info","ts":"2024-03-08T04:19:12.938526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-08T04:19:12.978418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgPreVoteResp from 53f1e4b6b2bc3c92 at term 1"}
	{"level":"info","ts":"2024-03-08T04:19:12.978549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.978555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgVoteResp from 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.978563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.97857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 53f1e4b6b2bc3c92 elected leader 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-08T04:19:12.982655Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:12.987491Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"53f1e4b6b2bc3c92","local-member-attributes":"{Name:embed-certs-416634 ClientURLs:[https://192.168.50.137:2379]}","request-path":"/0/members/53f1e4b6b2bc3c92/attributes","cluster-id":"7ac1a4431768b343","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:19:12.987984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:19:13.004681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.137:2379"}
	{"level":"info","ts":"2024-03-08T04:19:13.008742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:19:13.00953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011762Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011813Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:19:13.011301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:19:13.011827Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:19:13.013911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:29:13.237989Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":673}
	{"level":"info","ts":"2024-03-08T04:29:13.241561Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":673,"took":"3.06408ms","hash":3910114690}
	{"level":"info","ts":"2024-03-08T04:29:13.241633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3910114690,"revision":673,"compact-revision":-1}
	{"level":"info","ts":"2024-03-08T04:34:13.246237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":916}
	{"level":"info","ts":"2024-03-08T04:34:13.248277Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":916,"took":"1.32198ms","hash":2035839504}
	{"level":"info","ts":"2024-03-08T04:34:13.248448Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2035839504,"revision":916,"compact-revision":673}
	
	
	==> kernel <==
	 04:34:37 up 20 min,  0 users,  load average: 0.56, 0.31, 0.26
	Linux embed-certs-416634 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3796ec2c42925d5343bb98760689ee3258d19c5c80a6ec048e7f899c92de7409] <==
	E0308 04:30:16.396671       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:30:16.396695       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:31:15.278686       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 04:32:15.278558       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:32:16.395245       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:32:16.395394       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:32:16.395408       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:32:16.397460       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:32:16.397583       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:32:16.397616       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0308 04:33:15.278019       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0308 04:34:15.278292       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:34:15.400616       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:15.400763       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:34:15.401980       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0308 04:34:16.400868       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:16.400948       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:34:16.400960       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:34:16.401033       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:34:16.401142       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:34:16.402239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [914f5c7bd0bf4fc4e09e3effe6b9e70f92f24c98891a3462e8fba74cd11c79aa] <==
	I0308 04:29:01.029972       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:29:30.471179       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:29:31.040088       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:30:00.476976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:30:01.049439       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:30:30.486154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:30:31.059614       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:30:53.756670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="584.732µs"
	E0308 04:31:00.493164       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:01.068039       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:31:04.756026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="114.461µs"
	E0308 04:31:30.499051       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:31.077687       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:32:00.506995       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:01.087431       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:32:30.513773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:31.096518       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:00.520140       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:01.105102       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:30.527612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:31.114617       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:34:00.533719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:34:01.127292       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:34:30.542556       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:34:31.135510       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [069e0e7141e5c2913d561769d9c73c1f0193ab650671ba07402a2de0ef54e1fa] <==
	I0308 04:19:33.732454       1 server_others.go:69] "Using iptables proxy"
	I0308 04:19:33.750392       1 node.go:141] Successfully retrieved node IP: 192.168.50.137
	I0308 04:19:33.801946       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 04:19:33.801999       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:19:33.805996       1 server_others.go:152] "Using iptables Proxier"
	I0308 04:19:33.806976       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:19:33.807458       1 server.go:846] "Version info" version="v1.28.4"
	I0308 04:19:33.807504       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:19:33.809147       1 config.go:188] "Starting service config controller"
	I0308 04:19:33.809621       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:19:33.809700       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:19:33.809738       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:19:33.810776       1 config.go:315] "Starting node config controller"
	I0308 04:19:33.810853       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:19:33.912553       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:19:33.912783       1 shared_informer.go:318] Caches are synced for node config
	I0308 04:19:33.912937       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a19746274b80ba4a445a53c39156c793fca9da67033fbe6ece890abc6a5d4c3f] <==
	W0308 04:19:15.451618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 04:19:15.451266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:15.456775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:19:15.456785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:19:15.457024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:15.457041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 04:19:15.457164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 04:19:15.457759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.283818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:16.284515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.405943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 04:19:16.405997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 04:19:16.438496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 04:19:16.438793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 04:19:16.451813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:19:16.452001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 04:19:16.461414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:19:16.461478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 04:19:16.503290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 04:19:16.503481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 04:19:16.581094       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 04:19:16.581409       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:19:16.640289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 04:19:16.640420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0308 04:19:19.020518       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:32:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:32:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:32:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:32:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:32:29 embed-certs-416634 kubelet[3707]: E0308 04:32:29.738447    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:32:41 embed-certs-416634 kubelet[3707]: E0308 04:32:41.737818    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:32:54 embed-certs-416634 kubelet[3707]: E0308 04:32:54.737607    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:33:05 embed-certs-416634 kubelet[3707]: E0308 04:33:05.737892    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:33:17 embed-certs-416634 kubelet[3707]: E0308 04:33:17.737707    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:33:18 embed-certs-416634 kubelet[3707]: E0308 04:33:18.848959    3707 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:33:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:33:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:33:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:33:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:33:29 embed-certs-416634 kubelet[3707]: E0308 04:33:29.738965    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:33:41 embed-certs-416634 kubelet[3707]: E0308 04:33:41.737963    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:33:52 embed-certs-416634 kubelet[3707]: E0308 04:33:52.739246    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:34:06 embed-certs-416634 kubelet[3707]: E0308 04:34:06.745547    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:34:17 embed-certs-416634 kubelet[3707]: E0308 04:34:17.738084    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	Mar 08 04:34:18 embed-certs-416634 kubelet[3707]: E0308 04:34:18.848553    3707 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:34:18 embed-certs-416634 kubelet[3707]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:34:18 embed-certs-416634 kubelet[3707]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:34:18 embed-certs-416634 kubelet[3707]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:34:18 embed-certs-416634 kubelet[3707]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:34:30 embed-certs-416634 kubelet[3707]: E0308 04:34:30.737297    3707 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kh9vr" podUID="eb205c10-4b89-499f-8cda-adae031e374b"
	
	
	==> storage-provisioner [58a3351a84ed3e1e7356107defec762d56622525d4e036e94a03be0fe214ab0f] <==
	I0308 04:19:33.360293       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:19:33.457461       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:19:33.457545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:19:33.521485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:19:33.521708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a!
	I0308 04:19:33.522949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f51a459-45c6-4ffa-b48e-0e7a8212c146", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a became leader
	I0308 04:19:33.623954       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-416634_c39d1d1f-296e-4ecf-8242-f3259476372a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-416634 -n embed-certs-416634
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-416634 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kh9vr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr: exit status 1 (82.298518ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kh9vr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-416634 describe pod metrics-server-57f55c9bc5-kh9vr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (249.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477676 -n no-preload-477676
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-08 04:34:07.640142076 +0000 UTC m=+5900.679051110
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-477676 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-477676 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.185µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-477676 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-477676 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-477676 logs -n 25: (1.351846546s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC | 08 Mar 24 04:34 UTC |
	| start   | -p newest-cni-525359 --memory=2200 --alsologtostderr   | newest-cni-525359            | jenkins | v1.32.0 | 08 Mar 24 04:34 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
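	For reference, the final row of the table above can be reassembled into a single invocation. This is a sketch built only from the flags recorded in that row; the binary path matches the MINIKUBE_BIN value shown in the start log below:

	  out/minikube-linux-amd64 start -p newest-cni-525359 \
	    --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.29.0-rc.2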
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:34:07
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:34:07.876423  965254 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:34:07.876571  965254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:34:07.876582  965254 out.go:304] Setting ErrFile to fd 2...
	I0308 04:34:07.876589  965254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:34:07.876900  965254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:34:07.877746  965254 out.go:298] Setting JSON to false
	I0308 04:34:07.879299  965254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":29774,"bootTime":1709842674,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:34:07.879392  965254 start.go:139] virtualization: kvm guest
	I0308 04:34:07.881845  965254 out.go:177] * [newest-cni-525359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:34:07.883049  965254 notify.go:220] Checking for updates...
	I0308 04:34:07.883060  965254 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:34:07.884163  965254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:34:07.885312  965254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:34:07.886426  965254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:34:07.887550  965254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:34:07.888733  965254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:34:07.890481  965254 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:34:07.890636  965254 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:34:07.890780  965254 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:34:07.890935  965254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:34:07.931324  965254 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:34:07.932596  965254 start.go:297] selected driver: kvm2
	I0308 04:34:07.932610  965254 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:34:07.932622  965254 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:34:07.933451  965254 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:34:07.933538  965254 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:34:07.951243  965254 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:34:07.951317  965254 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0308 04:34:07.951346  965254 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0308 04:34:07.951607  965254 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0308 04:34:07.951717  965254 cni.go:84] Creating CNI manager for ""
	I0308 04:34:07.951734  965254 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:34:07.951745  965254 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 04:34:07.951812  965254 start.go:340] cluster config:
	{Name:newest-cni-525359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-525359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:34:07.951965  965254 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:34:07.953762  965254 out.go:177] * Starting "newest-cni-525359" primary control-plane node in "newest-cni-525359" cluster
	I0308 04:34:07.954840  965254 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:34:07.954873  965254 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0308 04:34:07.954880  965254 cache.go:56] Caching tarball of preloaded images
	I0308 04:34:07.954970  965254 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:34:07.954982  965254 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0308 04:34:07.955083  965254 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/newest-cni-525359/config.json ...
	I0308 04:34:07.955102  965254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/newest-cni-525359/config.json: {Name:mkff879a2fb032ebb701a33458fec351631fa9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:34:07.955309  965254 start.go:360] acquireMachinesLock for newest-cni-525359: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:34:07.955341  965254 start.go:364] duration metric: took 17.846µs to acquireMachinesLock for "newest-cni-525359"
	I0308 04:34:07.955359  965254 start.go:93] Provisioning new machine with config: &{Name:newest-cni-525359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-525359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:34:07.955455  965254 start.go:125] createHost starting for "" (driver="kvm2")
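	The start log above shows minikube validating the kvm2 driver, generating the cluster config, finding the CRI-O preload tarball already in the local cache (so the download is skipped), and saving the profile to config.json before acquiring the machine lock. The artifacts it references can be inspected directly on the build host; a sketch, assuming the paths recorded in the log are still present (the .KubernetesConfig field name is taken from the cluster config dump above, and plain cat works if jq is not installed):

	  # preload tarball that start.go found in the cache
	  ls -lh /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4

	  # profile config written by profile.go
	  jq '.KubernetesConfig' /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/newest-cni-525359/config.json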
	
	
	==> CRI-O <==
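	The entries below are CRI-O's debug trace of the CRI calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers, RuntimeService/ListPodSandbox) made against the no-preload-477676 node while these logs were collected. The same endpoints can be exercised by hand with crictl; a sketch, assuming crictl is available inside the node as it normally is on a CRI-O minikube VM:

	  out/minikube-linux-amd64 -p no-preload-477676 ssh "sudo crictl version"      # RuntimeService/Version
	  out/minikube-linux-amd64 -p no-preload-477676 ssh "sudo crictl imagefsinfo"  # ImageService/ImageFsInfo
	  out/minikube-linux-amd64 -p no-preload-477676 ssh "sudo crictl ps -a"        # RuntimeService/ListContainers
	  out/minikube-linux-amd64 -p no-preload-477676 ssh "sudo crictl pods"         # RuntimeService/ListPodSandbox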
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.400393597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79d9f544-8354-4804-af97-45810e9b41ae name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.401775071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=922b0c76-01d3-413e-81e4-6123043347b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.402635812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872448402612394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=922b0c76-01d3-413e-81e4-6123043347b2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.403196976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=187f0038-cf18-4b55-bdb5-54ee30e1d14d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.403243488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=187f0038-cf18-4b55-bdb5-54ee30e1d14d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.403413022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=187f0038-cf18-4b55-bdb5-54ee30e1d14d name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.453065079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11b59166-ea08-4702-bf3f-b7981dbad7f5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.453155722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11b59166-ea08-4702-bf3f-b7981dbad7f5 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.459734998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f75b10d9-55b9-4019-9467-29ccd77a9b72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.460615992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872448460591470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f75b10d9-55b9-4019-9467-29ccd77a9b72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.461729209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc0b1203-ec13-4069-9d23-2a10d8e01364 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.461910282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc0b1203-ec13-4069-9d23-2a10d8e01364 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.462109487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc0b1203-ec13-4069-9d23-2a10d8e01364 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.485287800Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=91fe56c7-a8fc-4144-83ef-207309bc96cb name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.485553680Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff8a52d515796ee1e93c820a9ddece348c13e9b5cdf2b45e770d65b290707295,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-756mf,Uid:3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871653823222101,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-756mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:20:53.498797751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871653665506221,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-08T04:20:53.356268832Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-hc8hb,Uid:2cfb86dd-0394-453d-92a7-b3c7f500cc5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871652423135877,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cfb86dd-0394-453d-92a7-b3c7f500cc5e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:20:51.485693687Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-kj6pn,Uid:48ed9c5f-0f19-4fc1-
be44-67dc8128f288,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871652422067652,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:20:51.458945836Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&PodSandboxMetadata{Name:kube-proxy-hr99w,Uid:568b12b2-3f01-4846-83fe-9d571ae15863,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871651796068055,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-08T04:20:50.868540704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-477676,Uid:0fc40e37d9fc58dcb8b231f9a7e60212,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871631720317313,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0fc40e37d9fc58dcb8b231f9a7e60212,kubernetes.io/config.seen: 2024-03-08T04:20:31.234798881Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-477676,Uid:3144899972be86020b3350370e80174f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871631695021360,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.214:8443,kubernetes.io/config.hash: 3144899972be86020b3350370e80174f,kubernetes.io/config.seen: 2024-03-08T04:20:31.234797164Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-477676,Uid:cd5f9d75d60e9327778ae89bf8c954f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871631693379507,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cd5f9d75d60e9327778ae89bf8c954f5,kubernetes.io/config.seen: 2024-03-08T04:20:31.234800019Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-477676,Uid:b6ac027e3862d734c1749b50c7e94bec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709871631665511215,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.214:237
9,kubernetes.io/config.hash: b6ac027e3862d734c1749b50c7e94bec,kubernetes.io/config.seen: 2024-03-08T04:20:31.234791941Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=91fe56c7-a8fc-4144-83ef-207309bc96cb name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.487983419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b307411-3f6a-4c46-acb4-5f0093bfd4c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.488063253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b307411-3f6a-4c46-acb4-5f0093bfd4c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.488252232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b307411-3f6a-4c46-acb4-5f0093bfd4c1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.516162823Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c8a0681-c314-4c2c-b835-3ede716de875 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.516286980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c8a0681-c314-4c2c-b835-3ede716de875 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.517722750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a19a73d7-bec2-4a19-9132-3ca4ce6a936c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.518195989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872448518164460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a19a73d7-bec2-4a19-9132-3ca4ce6a936c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.518906030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f41fd91e-0dc0-4b9e-8b30-762b8875b58b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.518980667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f41fd91e-0dc0-4b9e-8b30-762b8875b58b name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:08 no-preload-477676 crio[693]: time="2024-03-08 04:34:08.519163771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759,PodSandboxId:f610f2004d32799e1d51a8e07a253c0f03dc75831eae741aede633b7c349d1fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709871653824448094,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e,},Annotations:map[string]string{io.kubernetes.container.hash: 595135aa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7,PodSandboxId:0e327ddee7d06bd59df08718a1e7af1b9cdc07aa0d2cb094e87faf41049ce9a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652963115433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kj6pn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48ed9c5f-0f19-4fc1-be44-67dc8128f288,},Annotations:map[string]string{io.kubernetes.container.hash: cc476167,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348,PodSandboxId:6a15b4ce6825e26fc1b0820dcc56e9fabdda629c067aaefb8caf3f29613000c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709871652943639269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hc8hb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
fb86dd-0394-453d-92a7-b3c7f500cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: 1e235185,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845,PodSandboxId:33e7763cddb8980c8498d99f9a28d2b9980c94c0e9b6cce8cac9e112afd794df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1709871651959250592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr99w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568b12b2-3f01-4846-83fe-9d571ae15863,},Annotations:map[string]string{io.kubernetes.container.hash: 474d3502,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412,PodSandboxId:1ecd4469af9c643d8194410ff52d6317a0895a0afbd0268cb927a0bbc9eb2b14,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709871632034979594,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3144899972be86020b3350370e80174f,},Annotations:map[string]string{io.kubernetes.container.hash: ab8ebf08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486,PodSandboxId:a4d40053267ff3f1a7c1c3d3ccd01f324bc0b72d158409cd94d62de7c970a814,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709871631957200457,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd5f9d75d60e9327778ae89bf8c954f5,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1,PodSandboxId:d27d66099466c246437b2fcd9bc7a1284d70043144d55648ea8c1933565f84a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709871631958434280,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc40e37d9fc58dcb8b231f9a7e60212,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734,PodSandboxId:e2a3319dbe680c8aa557c7d47e5d4808694f210b0a739b9ecf3261f9d147ca9e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709871631864536874,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-477676,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac027e3862d734c1749b50c7e94bec,},Annotations:map[string]string{io.kubernetes.container.hash: d0a5f4d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f41fd91e-0dc0-4b9e-8b30-762b8875b58b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9cdfabb3cefbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   f610f2004d327       storage-provisioner
	d6369b2ee70d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   0e327ddee7d06       coredns-76f75df574-kj6pn
	415c28097fb28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   6a15b4ce6825e       coredns-76f75df574-hc8hb
	b2345362ee614       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   33e7763cddb89       kube-proxy-hr99w
	e301dc16dd6a1       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   1ecd4469af9c6       kube-apiserver-no-preload-477676
	c4be4bd9dfeb7       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   d27d66099466c       kube-controller-manager-no-preload-477676
	80e16eaa474ea       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   a4d40053267ff       kube-scheduler-no-preload-477676
	f32a6376e7a62       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   e2a3319dbe680       etcd-no-preload-477676
	
	
	==> coredns [415c28097fb2871f8d8eeb8f7cf83cd97e31f06de3c26eb3224652645ee64348] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d6369b2ee70d132a7f62c4fded7ad91707d8cf14af8999cfc967069c5011f9e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-477676
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-477676
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b
	                    minikube.k8s.io/name=no-preload-477676
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 04:20:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-477676
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 04:34:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 04:31:10 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 04:31:10 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 04:31:10 +0000   Fri, 08 Mar 2024 04:20:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 04:31:10 +0000   Fri, 08 Mar 2024 04:20:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.214
	  Hostname:    no-preload-477676
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ee474f5d38b412f97d44586a1c6295d
	  System UUID:                0ee474f5-d38b-412f-97d4-4586a1c6295d
	  Boot ID:                    5a090d92-5599-4ca0-8e46-294782b3c871
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hc8hb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-kj6pn                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-477676                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-477676             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-477676    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hr99w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-477676             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-756mf              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-477676 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-477676 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-477676 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-477676 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-477676 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-477676 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-477676 event: Registered Node no-preload-477676 in Controller
	
	
	==> dmesg <==
	[  +0.055608] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047285] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar 8 04:15] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.621469] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.750428] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.474086] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.060671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071882] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.219808] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.146954] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.258947] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[ +17.368653] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.074529] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.475552] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +4.603960] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.845110] kauditd_printk_skb: 74 callbacks suppressed
	[Mar 8 04:20] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.403400] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[  +4.619184] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.669708] systemd-fstab-generator[4172]: Ignoring "noauto" option for root device
	[ +13.866624] systemd-fstab-generator[4385]: Ignoring "noauto" option for root device
	[  +0.069194] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 8 04:21] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f32a6376e7a62c88afb64e0bc54ad55b1a329c6651a96458d46a98699a115734] <==
	{"level":"info","ts":"2024-03-08T04:20:32.260725Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ae03a842fc865c93","initial-advertise-peer-urls":["https://192.168.72.214:2380"],"listen-peer-urls":["https://192.168.72.214:2380"],"advertise-client-urls":["https://192.168.72.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T04:20:32.260739Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.214:2380"}
	{"level":"info","ts":"2024-03-08T04:20:32.260922Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.214:2380"}
	{"level":"info","ts":"2024-03-08T04:20:32.260975Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T04:20:33.173093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.174908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.175063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 received MsgPreVoteResp from ae03a842fc865c93 at term 1"}
	{"level":"info","ts":"2024-03-08T04:20:33.175191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became candidate at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 received MsgVoteResp from ae03a842fc865c93 at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae03a842fc865c93 became leader at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.175447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ae03a842fc865c93 elected leader ae03a842fc865c93 at term 2"}
	{"level":"info","ts":"2024-03-08T04:20:33.180191Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ae03a842fc865c93","local-member-attributes":"{Name:no-preload-477676 ClientURLs:[https://192.168.72.214:2379]}","request-path":"/0/members/ae03a842fc865c93/attributes","cluster-id":"4d2f25243ca737f5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T04:20:33.182897Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.183069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:20:33.183524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T04:20:33.187538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T04:20:33.189688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.214:2379"}
	{"level":"info","ts":"2024-03-08T04:20:33.189966Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T04:20:33.190007Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T04:20:33.190057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4d2f25243ca737f5","local-member-id":"ae03a842fc865c93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.190181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:20:33.190243Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T04:30:33.244371Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-03-08T04:30:33.247296Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":712,"took":"2.498617ms","hash":1528808112}
	{"level":"info","ts":"2024-03-08T04:30:33.247368Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1528808112,"revision":712,"compact-revision":-1}
	
	
	==> kernel <==
	 04:34:08 up 19 min,  0 users,  load average: 0.60, 0.29, 0.21
	Linux no-preload-477676 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e301dc16dd6a1ca20cabbc5132845ecf9b5c51aaaf005ea935a0c76e8c9fb412] <==
	I0308 04:28:35.801499       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:30:34.802956       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:30:34.803086       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0308 04:30:35.803982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:30:35.804088       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:30:35.804117       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:30:35.804180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:30:35.804258       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:30:35.805509       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:31:35.805265       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:31:35.805341       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:31:35.805352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:31:35.806577       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:31:35.806887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:31:35.806957       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:33:35.806129       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:33:35.806413       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0308 04:33:35.806446       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0308 04:33:35.807661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0308 04:33:35.807783       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0308 04:33:35.807809       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c4be4bd9dfeb7c5580d288e606044c2e0e031589ea9375e351b2ca4f4b6824b1] <==
	I0308 04:28:21.007621       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:28:50.455440       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:28:51.018307       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:29:20.461662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:29:21.027724       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:29:50.469732       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:29:51.036789       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:30:20.474806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:30:21.047250       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:30:50.481147       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:30:51.055706       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:31:20.487327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:21.063157       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:31:50.493412       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:31:51.071742       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0308 04:32:03.344700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="372.123µs"
	I0308 04:32:16.343334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="105.539µs"
	E0308 04:32:20.499397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:21.082754       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:32:50.505134       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:32:51.092126       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:20.510379       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:21.100564       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0308 04:33:50.519901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0308 04:33:51.110771       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b2345362ee614e2719992d0c8f8f68f4584bebb7844a392d18d25b186495d845] <==
	I0308 04:20:52.167156       1 server_others.go:72] "Using iptables proxy"
	I0308 04:20:52.188198       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.214"]
	I0308 04:20:52.284557       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0308 04:20:52.284608       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 04:20:52.284622       1 server_others.go:168] "Using iptables Proxier"
	I0308 04:20:52.299350       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 04:20:52.299611       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0308 04:20:52.299651       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 04:20:52.302722       1 config.go:188] "Starting service config controller"
	I0308 04:20:52.302768       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 04:20:52.302785       1 config.go:97] "Starting endpoint slice config controller"
	I0308 04:20:52.302789       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 04:20:52.303990       1 config.go:315] "Starting node config controller"
	I0308 04:20:52.304025       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 04:20:52.402986       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 04:20:52.403066       1 shared_informer.go:318] Caches are synced for service config
	I0308 04:20:52.404059       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [80e16eaa474ea8b6fa58d05299ed9ef1bf7060d22eff85e682c937ad2ff41486] <==
	W0308 04:20:34.845230       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 04:20:34.845355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 04:20:34.845389       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:20:34.845511       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 04:20:34.845934       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 04:20:34.846078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 04:20:35.643949       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 04:20:35.644029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 04:20:35.673037       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 04:20:35.673093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 04:20:35.700806       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 04:20:35.701050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 04:20:35.741028       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 04:20:35.741255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 04:20:35.806295       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 04:20:35.806421       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 04:20:35.862644       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 04:20:35.862697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 04:20:35.966953       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 04:20:35.967101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 04:20:35.968414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 04:20:35.968564       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 04:20:36.027325       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 04:20:36.027382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0308 04:20:38.720924       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 04:31:39 no-preload-477676 kubelet[4179]: E0308 04:31:39.323254    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:31:51 no-preload-477676 kubelet[4179]: E0308 04:31:51.341894    4179 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 08 04:31:51 no-preload-477676 kubelet[4179]: E0308 04:31:51.342075    4179 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 08 04:31:51 no-preload-477676 kubelet[4179]: E0308 04:31:51.343096    4179 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4pgzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-756mf_kube-system(3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 08 04:31:51 no-preload-477676 kubelet[4179]: E0308 04:31:51.343199    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:32:03 no-preload-477676 kubelet[4179]: E0308 04:32:03.323623    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:32:16 no-preload-477676 kubelet[4179]: E0308 04:32:16.326611    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:32:31 no-preload-477676 kubelet[4179]: E0308 04:32:31.324325    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:32:38 no-preload-477676 kubelet[4179]: E0308 04:32:38.360946    4179 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:32:38 no-preload-477676 kubelet[4179]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:32:38 no-preload-477676 kubelet[4179]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:32:38 no-preload-477676 kubelet[4179]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:32:38 no-preload-477676 kubelet[4179]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:32:42 no-preload-477676 kubelet[4179]: E0308 04:32:42.323676    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:32:57 no-preload-477676 kubelet[4179]: E0308 04:32:57.324484    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:33:08 no-preload-477676 kubelet[4179]: E0308 04:33:08.323638    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:33:21 no-preload-477676 kubelet[4179]: E0308 04:33:21.323396    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:33:35 no-preload-477676 kubelet[4179]: E0308 04:33:35.325600    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:33:38 no-preload-477676 kubelet[4179]: E0308 04:33:38.361552    4179 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 04:33:38 no-preload-477676 kubelet[4179]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 04:33:38 no-preload-477676 kubelet[4179]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 04:33:38 no-preload-477676 kubelet[4179]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 04:33:38 no-preload-477676 kubelet[4179]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 04:33:46 no-preload-477676 kubelet[4179]: E0308 04:33:46.323361    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	Mar 08 04:34:01 no-preload-477676 kubelet[4179]: E0308 04:34:01.323740    4179 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-756mf" podUID="3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f"
	
	
	==> storage-provisioner [9cdfabb3cefbb3f0299de16529aad82d4e50f6098abfb683046ac8b80f8c2759] <==
	I0308 04:20:53.970999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0308 04:20:54.005946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0308 04:20:54.006015       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0308 04:20:54.035778       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d7ae5d8-3d11-424b-913d-7f8abac3e49d", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f became leader
	I0308 04:20:54.032409       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0308 04:20:54.036331       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f!
	I0308 04:20:54.137182       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-477676_1b41c5e5-e5b7-4f60-ac08-890ed8ad457f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-477676 -n no-preload-477676
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-477676 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-756mf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf: exit status 1 (80.091998ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-756mf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-477676 describe pod metrics-server-57f55c9bc5-756mf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (249.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (115.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.3:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.3:8443: connect: connection refused
helpers_test.go:329: (the connection-refused warning above repeats 40 more times as the poll continues)
E0308 04:32:52.008763  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
helpers_test.go:329: (the same connection-refused warning repeats 40 more times as the poll continues)
E0308 04:33:32.256812  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
helpers_test.go:329: (the same connection-refused warning repeats 31 more times until the 9m0s poll budget is exhausted)
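The warnings above come from the harness polling the kubernetes-dashboard namespace for pods labeled k8s-app=kubernetes-dashboard while the apiserver at 192.168.39.3:8443 refuses connections. A minimal sketch of an equivalent poll, assuming client-go and an illustrative kubeconfig path (this is not the minikube test helper itself):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the real harness resolves this per profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(9 * time.Minute) // same budget the test allows
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    		if err != nil {
    			// Corresponds to the WARNING lines: the list call fails while the apiserver is down.
    			fmt.Println("WARNING: pod list failed:", err)
    			time.Sleep(5 * time.Second)
    			continue
    		}
    		fmt.Println("found", len(pods.Items), "dashboard pods")
    		return
    	}
    	fmt.Println("pod list never succeeded within 9m0s")
    }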
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (258.426238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-496808" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-496808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-496808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.677µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-496808 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
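The final assertion checks that the dashboard-metrics-scraper deployment picked up the overridden MetricsScraper image (registry.k8s.io/echoserver:1.4) supplied when the dashboard addon was enabled. A minimal sketch of that check, assuming client-go and the same illustrative kubeconfig path; the real test shells out to kubectl describe instead:

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	dep, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(context.TODO(),
    		"dashboard-metrics-scraper", metav1.GetOptions{})
    	if err != nil {
    		panic(err) // in this run the apiserver is down, so the lookup itself fails
    	}
    	for _, c := range dep.Spec.Template.Spec.Containers {
    		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
    			fmt.Println("scraper image override applied:", c.Image)
    			return
    		}
    	}
    	fmt.Println("expected image registry.k8s.io/echoserver:1.4 not found")
    }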
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (242.832133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
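The post-mortem probes use minikube's --format flag, which renders a Go text/template against the profile status: {{.Host}} reports Running here while {{.APIServer}} reported Stopped above. A small sketch of that rendering, with a hypothetical Status struct standing in for minikube's real status type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical stand-in for the fields minikube exposes to --format.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
    	// Equivalent of: minikube status --format={{.APIServer}}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    }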
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-496808 logs -n 25: (1.539111121s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-219954                           | kubernetes-upgrade-219954    | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-292856                            | force-systemd-env-292856     | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:04 UTC |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:04 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:05 UTC | 08 Mar 24 04:06 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-401581                              | cert-expiration-401581       | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-030050 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | disable-driver-mounts-030050                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:07 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-477676             | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-416634            | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC | 08 Mar 24 04:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-968261  | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC | 08 Mar 24 04:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:07 UTC |                     |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-496808        | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-477676                  | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-477676                                   | no-preload-477676            | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-416634                 | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-416634                                  | embed-certs-416634           | jenkins | v1.32.0 | 08 Mar 24 04:09 UTC | 08 Mar 24 04:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-968261       | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-968261 | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:19 UTC |
	|         | default-k8s-diff-port-968261                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-496808             | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC | 08 Mar 24 04:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-496808                              | old-k8s-version-496808       | jenkins | v1.32.0 | 08 Mar 24 04:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 04:10:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 04:10:19.147604  959882 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:10:19.147716  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147725  959882 out.go:304] Setting ErrFile to fd 2...
	I0308 04:10:19.147729  959882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:10:19.147921  959882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:10:19.148465  959882 out.go:298] Setting JSON to false
	I0308 04:10:19.149449  959882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":28345,"bootTime":1709842674,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:10:19.149519  959882 start.go:139] virtualization: kvm guest
	I0308 04:10:19.152544  959882 out.go:177] * [old-k8s-version-496808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:10:19.154011  959882 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:10:19.155284  959882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:10:19.154046  959882 notify.go:220] Checking for updates...
	I0308 04:10:19.156633  959882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:10:19.157942  959882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:10:19.159101  959882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:10:19.160245  959882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:10:19.161717  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:10:19.162126  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.162184  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.176782  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0308 04:10:19.177120  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.177713  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.177740  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.178102  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.178344  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.179897  959882 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0308 04:10:19.181157  959882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:10:19.181459  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:10:19.181490  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:10:19.195517  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0308 04:10:19.195932  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:10:19.196314  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:10:19.196327  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:10:19.196658  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:10:19.196823  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:10:19.230064  959882 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 04:10:19.231288  959882 start.go:297] selected driver: kvm2
	I0308 04:10:19.231303  959882 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.231418  959882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:10:19.232078  959882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.232156  959882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 04:10:19.246188  959882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 04:10:19.246544  959882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:10:19.246629  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:10:19.246646  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:10:19.246702  959882 start.go:340] cluster config:
	{Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:10:19.246819  959882 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 04:10:19.248446  959882 out.go:177] * Starting "old-k8s-version-496808" primary control-plane node in "old-k8s-version-496808" cluster
	I0308 04:10:19.249434  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:10:19.249468  959882 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 04:10:19.249492  959882 cache.go:56] Caching tarball of preloaded images
	I0308 04:10:19.249572  959882 preload.go:173] Found /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0308 04:10:19.249585  959882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0308 04:10:19.249692  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:10:19.249886  959882 start.go:360] acquireMachinesLock for old-k8s-version-496808: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:10:22.257497  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:25.329577  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:31.409555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:34.481658  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:40.561728  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:43.633590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:49.713567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:52.785626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:10:58.865518  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:01.937626  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:08.017522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:11.089580  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:17.169531  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:20.241547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:26.321539  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:29.393549  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:35.473561  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:38.545522  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:44.625534  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:47.697619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:53.777527  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:11:56.849560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:02.929535  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:06.001490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:12.081519  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:15.153493  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:21.233556  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:24.305555  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:30.385581  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:33.457558  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:39.537572  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:42.609490  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:48.689657  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:51.761546  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:12:57.841567  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:00.913668  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:06.993589  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:10.065596  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:16.145635  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:19.217598  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:25.297590  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:28.369619  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:34.449516  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:37.521547  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:43.601560  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:46.673550  959302 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.214:22: connect: no route to host
	I0308 04:13:49.677993  959419 start.go:364] duration metric: took 4m26.689245413s to acquireMachinesLock for "embed-certs-416634"
	I0308 04:13:49.678109  959419 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:13:49.678120  959419 fix.go:54] fixHost starting: 
	I0308 04:13:49.678501  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:13:49.678534  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:13:49.694476  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0308 04:13:49.694945  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:13:49.695410  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:13:49.695431  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:13:49.695789  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:13:49.696025  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:13:49.696169  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:13:49.697810  959419 fix.go:112] recreateIfNeeded on embed-certs-416634: state=Stopped err=<nil>
	I0308 04:13:49.697832  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	W0308 04:13:49.698008  959419 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:13:49.699819  959419 out.go:177] * Restarting existing kvm2 VM for "embed-certs-416634" ...
	I0308 04:13:49.675276  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:13:49.675316  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.675748  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:13:49.675778  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:13:49.676001  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:13:49.677825  959302 machine.go:97] duration metric: took 4m37.413037133s to provisionDockerMachine
	I0308 04:13:49.677876  959302 fix.go:56] duration metric: took 4m37.43406s for fixHost
	I0308 04:13:49.677885  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 4m37.434086663s
	W0308 04:13:49.677910  959302 start.go:713] error starting host: provision: host is not running
	W0308 04:13:49.678151  959302 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0308 04:13:49.678170  959302 start.go:728] Will try again in 5 seconds ...
	I0308 04:13:49.701182  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Start
	I0308 04:13:49.701405  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring networks are active...
	I0308 04:13:49.702223  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network default is active
	I0308 04:13:49.702613  959419 main.go:141] libmachine: (embed-certs-416634) Ensuring network mk-embed-certs-416634 is active
	I0308 04:13:49.703033  959419 main.go:141] libmachine: (embed-certs-416634) Getting domain xml...
	I0308 04:13:49.703856  959419 main.go:141] libmachine: (embed-certs-416634) Creating domain...
	I0308 04:13:50.892756  959419 main.go:141] libmachine: (embed-certs-416634) Waiting to get IP...
	I0308 04:13:50.893644  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:50.894118  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:50.894223  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:50.894098  960410 retry.go:31] will retry after 279.194711ms: waiting for machine to come up
	I0308 04:13:51.175574  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.176475  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.176502  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.176427  960410 retry.go:31] will retry after 389.469955ms: waiting for machine to come up
	I0308 04:13:51.567091  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.567481  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.567513  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.567432  960410 retry.go:31] will retry after 429.64835ms: waiting for machine to come up
	I0308 04:13:51.999052  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:51.999436  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:51.999459  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:51.999394  960410 retry.go:31] will retry after 442.533269ms: waiting for machine to come up
	I0308 04:13:52.443930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.444415  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.444447  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.444346  960410 retry.go:31] will retry after 523.764229ms: waiting for machine to come up
	I0308 04:13:54.678350  959302 start.go:360] acquireMachinesLock for no-preload-477676: {Name:mkbe5f6692e9dd9c44a0d74f7d275f14772a7948 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 04:13:52.970050  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:52.970473  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:52.970516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:52.970415  960410 retry.go:31] will retry after 935.926663ms: waiting for machine to come up
	I0308 04:13:53.907612  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:53.907999  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:53.908030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:53.907962  960410 retry.go:31] will retry after 754.083585ms: waiting for machine to come up
	I0308 04:13:54.663901  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:54.664365  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:54.664395  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:54.664299  960410 retry.go:31] will retry after 1.102565731s: waiting for machine to come up
	I0308 04:13:55.768872  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:55.769340  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:55.769369  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:55.769296  960410 retry.go:31] will retry after 1.133721347s: waiting for machine to come up
	I0308 04:13:56.904589  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:56.905030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:56.905058  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:56.904998  960410 retry.go:31] will retry after 2.006442316s: waiting for machine to come up
	I0308 04:13:58.914300  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:13:58.914857  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:13:58.914886  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:13:58.914816  960410 retry.go:31] will retry after 2.539946779s: waiting for machine to come up
	I0308 04:14:01.457035  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:01.457530  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:01.457562  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:01.457447  960410 retry.go:31] will retry after 2.2953096s: waiting for machine to come up
	I0308 04:14:03.756109  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:03.756564  959419 main.go:141] libmachine: (embed-certs-416634) DBG | unable to find current IP address of domain embed-certs-416634 in network mk-embed-certs-416634
	I0308 04:14:03.756601  959419 main.go:141] libmachine: (embed-certs-416634) DBG | I0308 04:14:03.756510  960410 retry.go:31] will retry after 3.924376528s: waiting for machine to come up
	I0308 04:14:07.683974  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684387  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has current primary IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.684407  959419 main.go:141] libmachine: (embed-certs-416634) Found IP for machine: 192.168.50.137
	I0308 04:14:07.684426  959419 main.go:141] libmachine: (embed-certs-416634) Reserving static IP address...
	I0308 04:14:07.684862  959419 main.go:141] libmachine: (embed-certs-416634) Reserved static IP address: 192.168.50.137
	I0308 04:14:07.684932  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.684955  959419 main.go:141] libmachine: (embed-certs-416634) Waiting for SSH to be available...
	I0308 04:14:07.684986  959419 main.go:141] libmachine: (embed-certs-416634) DBG | skip adding static IP to network mk-embed-certs-416634 - found existing host DHCP lease matching {name: "embed-certs-416634", mac: "52:54:00:5a:68:e3", ip: "192.168.50.137"}
	I0308 04:14:07.685001  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Getting to WaitForSSH function...
	I0308 04:14:07.687389  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687724  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.687753  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.687843  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH client type: external
	I0308 04:14:07.687876  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa (-rw-------)
	I0308 04:14:07.687911  959419 main.go:141] libmachine: (embed-certs-416634) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:07.687930  959419 main.go:141] libmachine: (embed-certs-416634) DBG | About to run SSH command:
	I0308 04:14:07.687943  959419 main.go:141] libmachine: (embed-certs-416634) DBG | exit 0
	I0308 04:14:07.809426  959419 main.go:141] libmachine: (embed-certs-416634) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:07.809863  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetConfigRaw
	I0308 04:14:07.810513  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:07.812923  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813297  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.813333  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.813545  959419 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/config.json ...
	I0308 04:14:07.813730  959419 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:07.813748  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:07.813951  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.816302  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816701  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.816734  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.816941  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.817157  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817354  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.817493  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.817675  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.818030  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.818043  959419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:09.122426  959713 start.go:364] duration metric: took 3m55.69774533s to acquireMachinesLock for "default-k8s-diff-port-968261"
	I0308 04:14:09.122512  959713 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:09.122522  959713 fix.go:54] fixHost starting: 
	I0308 04:14:09.122937  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:09.122983  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:09.139672  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0308 04:14:09.140140  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:09.140622  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:09.140648  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:09.140987  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:09.141156  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:09.141296  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:09.142853  959713 fix.go:112] recreateIfNeeded on default-k8s-diff-port-968261: state=Stopped err=<nil>
	I0308 04:14:09.142895  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	W0308 04:14:09.143058  959713 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:09.145167  959713 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-968261" ...
	I0308 04:14:07.917810  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:07.917842  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918120  959419 buildroot.go:166] provisioning hostname "embed-certs-416634"
	I0308 04:14:07.918150  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:07.918378  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:07.921033  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921409  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:07.921450  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:07.921585  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:07.921782  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922064  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:07.922225  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:07.922412  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:07.922585  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:07.922605  959419 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-416634 && echo "embed-certs-416634" | sudo tee /etc/hostname
	I0308 04:14:08.036882  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-416634
	
	I0308 04:14:08.036914  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.039668  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040029  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.040064  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.040168  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.040398  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040563  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.040719  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.040863  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.041038  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.041055  959419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-416634' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-416634/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-416634' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:08.148126  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:08.148167  959419 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:08.148196  959419 buildroot.go:174] setting up certificates
	I0308 04:14:08.148210  959419 provision.go:84] configureAuth start
	I0308 04:14:08.148223  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetMachineName
	I0308 04:14:08.148522  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:08.151261  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151643  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.151675  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.151801  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.154383  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154803  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.154832  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.154990  959419 provision.go:143] copyHostCerts
	I0308 04:14:08.155050  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:08.155065  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:08.155178  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:08.155306  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:08.155317  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:08.155345  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:08.155404  959419 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:08.155411  959419 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:08.155431  959419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:08.155488  959419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.embed-certs-416634 san=[127.0.0.1 192.168.50.137 embed-certs-416634 localhost minikube]
	I0308 04:14:08.429503  959419 provision.go:177] copyRemoteCerts
	I0308 04:14:08.429579  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:08.429609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.432704  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433030  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.433062  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.433209  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.433430  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.433666  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.433825  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.511628  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:08.543751  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0308 04:14:08.576231  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:14:08.608819  959419 provision.go:87] duration metric: took 460.594888ms to configureAuth
	I0308 04:14:08.608849  959419 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:08.609041  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:08.609134  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.612139  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612510  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.612563  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.612781  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.613003  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613197  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.613396  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.613617  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:08.613805  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:08.613826  959419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:08.891898  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:08.891954  959419 machine.go:97] duration metric: took 1.078186177s to provisionDockerMachine
	I0308 04:14:08.891972  959419 start.go:293] postStartSetup for "embed-certs-416634" (driver="kvm2")
	I0308 04:14:08.891988  959419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:08.892022  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:08.892410  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:08.892452  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:08.895116  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895498  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:08.895537  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:08.895637  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:08.895836  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:08.896054  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:08.896230  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:08.976479  959419 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:08.981537  959419 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:08.981565  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:08.981641  959419 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:08.981730  959419 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:08.981841  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:08.991619  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:09.018124  959419 start.go:296] duration metric: took 126.137563ms for postStartSetup
	I0308 04:14:09.018171  959419 fix.go:56] duration metric: took 19.340048389s for fixHost
	I0308 04:14:09.018205  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.020650  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021012  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.021040  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.021190  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.021394  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021591  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.021746  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.021907  959419 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:09.022082  959419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0308 04:14:09.022093  959419 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:09.122257  959419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871249.091803486
	
	I0308 04:14:09.122286  959419 fix.go:216] guest clock: 1709871249.091803486
	I0308 04:14:09.122297  959419 fix.go:229] Guest: 2024-03-08 04:14:09.091803486 +0000 UTC Remote: 2024-03-08 04:14:09.01818642 +0000 UTC m=+286.175988249 (delta=73.617066ms)
	I0308 04:14:09.122326  959419 fix.go:200] guest clock delta is within tolerance: 73.617066ms
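A note on the "%!s(MISSING)" and "%!N(MISSING)" placeholders that appear in the logged commands above (the "date +%!s(MISSING).%!N(MISSING)" step and the earlier "printf %!s(MISSING)" sysconfig step, as well as "%!p(MISSING)" further down): these are Go's fmt package rendering format verbs that had no corresponding argument when the command string was echoed through a Printf-style logger. The command presumably sent to the guest is the literal "date +%s.%N", which is consistent with the seconds.nanoseconds output 1709871249.091803486 recorded above. A minimal sketch of how the placeholder arises (illustrative only, not the actual minikube call site):

package main

import "fmt"

func main() {
	cmd := "date +%s.%N" // command presumably sent over SSH (assumption)
	// Passing the command string as a format string with no arguments
	// makes fmt render each verb as %!<verb>(MISSING), matching the log.
	fmt.Printf("About to run SSH command:\n")
	fmt.Printf(cmd + "\n") // prints: date +%!s(MISSING).%!N(MISSING)
}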
	I0308 04:14:09.122335  959419 start.go:83] releasing machines lock for "embed-certs-416634", held for 19.444293643s
	I0308 04:14:09.122369  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.122676  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:09.125553  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.125925  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.125953  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.126089  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126642  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126828  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:14:09.126910  959419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:09.126971  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.127092  959419 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:09.127130  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:14:09.129516  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129839  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.129879  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.129902  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130067  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130247  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130279  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:09.130306  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:09.130410  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130496  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:14:09.130568  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.130644  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:14:09.130840  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:14:09.130984  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:14:09.238125  959419 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:09.245265  959419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:09.399185  959419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:09.406549  959419 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:09.406620  959419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:09.424848  959419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
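
The step above renames any bridge/podman CNI config with a ".mk_disabled" suffix so the container runtime ignores it while keeping it recoverable. A minimal Go sketch of that rename pass (the directory and suffix come from the logged find command; the rest is illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Look at the CNI configs that the logged find command scans in /etc/cni/net.d.
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			// Renaming (rather than deleting) keeps the config recoverable later.
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}
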
	I0308 04:14:09.424869  959419 start.go:494] detecting cgroup driver to use...
	I0308 04:14:09.424921  959419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:09.441591  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:09.455401  959419 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:09.455456  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:09.470229  959419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:09.484898  959419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:09.616292  959419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:09.777173  959419 docker.go:233] disabling docker service ...
	I0308 04:14:09.777244  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:09.794692  959419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:09.808732  959419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:09.955827  959419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:10.081307  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:10.097126  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:10.123352  959419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:10.123423  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.137096  959419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:10.137154  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.155204  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.168133  959419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:10.179827  959419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:10.192025  959419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:10.202768  959419 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:10.202822  959419 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:10.228536  959419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:10.241192  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:10.381504  959419 ssh_runner.go:195] Run: sudo systemctl restart crio
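
The sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup by rewriting the CRI-O drop-in in place; crio is then restarted to pick the changes up. A rough Go equivalent of the first two rewrites (file path and values are taken from the log; the code is only a sketch, not minikube's implementation):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}
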
	I0308 04:14:10.538512  959419 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:10.538603  959419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:10.544342  959419 start.go:562] Will wait 60s for crictl version
	I0308 04:14:10.544408  959419 ssh_runner.go:195] Run: which crictl
	I0308 04:14:10.549096  959419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:10.594001  959419 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:10.594117  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.633643  959419 ssh_runner.go:195] Run: crio --version
	I0308 04:14:10.688427  959419 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:10.689773  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetIP
	I0308 04:14:10.692847  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693339  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:14:10.693377  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:14:10.693591  959419 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:10.698326  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
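
The /etc/hosts update above is idempotent: it filters out any stale host.minikube.internal entry and appends the current one before copying the file back. A small Go sketch of the same rewrite (the IP and hostname come from the logged command; everything else is illustrative):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal line, like the grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
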
	I0308 04:14:10.712628  959419 kubeadm.go:877] updating cluster {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:10.712804  959419 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:10.712877  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:10.750752  959419 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:10.750841  959419 ssh_runner.go:195] Run: which lz4
	I0308 04:14:10.755586  959419 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0308 04:14:10.760484  959419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:10.760517  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:12.767008  959419 crio.go:444] duration metric: took 2.011460838s to copy over tarball
	I0308 04:14:12.767093  959419 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
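
No preloaded images were found in the CRI-O store, so the ~458 MB preload tarball is copied over SSH and unpacked into /var with lz4. A hedged sketch of the extraction step, mirroring the logged tar flags (not minikube's own code):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command:
	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
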
	I0308 04:14:09.146531  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Start
	I0308 04:14:09.146714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring networks are active...
	I0308 04:14:09.147381  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network default is active
	I0308 04:14:09.147745  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Ensuring network mk-default-k8s-diff-port-968261 is active
	I0308 04:14:09.148126  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Getting domain xml...
	I0308 04:14:09.148805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Creating domain...
	I0308 04:14:10.379399  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting to get IP...
	I0308 04:14:10.380389  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380789  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.380921  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.380796  960528 retry.go:31] will retry after 198.268951ms: waiting for machine to come up
	I0308 04:14:10.580709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581392  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.581426  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.581330  960528 retry.go:31] will retry after 390.203073ms: waiting for machine to come up
	I0308 04:14:10.972958  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973435  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:10.973468  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:10.973387  960528 retry.go:31] will retry after 381.931996ms: waiting for machine to come up
	I0308 04:14:11.357210  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357873  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.357905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.357844  960528 retry.go:31] will retry after 596.150639ms: waiting for machine to come up
	I0308 04:14:11.955528  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956055  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:11.956081  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:11.956020  960528 retry.go:31] will retry after 654.908309ms: waiting for machine to come up
	I0308 04:14:12.612989  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:12.613596  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:12.613512  960528 retry.go:31] will retry after 580.027629ms: waiting for machine to come up
	I0308 04:14:13.195534  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196100  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:13.196129  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:13.196050  960528 retry.go:31] will retry after 894.798416ms: waiting for machine to come up
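
While the embed-certs node is being prepared, the default-k8s-diff-port VM is still waiting for a DHCP lease; the driver keeps polling and retries with a growing, jittered delay (the retry.go lines above). A simplified, self-contained sketch of that wait loop; lookupIP is a placeholder, not a real minikube helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, similar to the intervals seen in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for a machine IP")
}
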
	I0308 04:14:15.621654  959419 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.85452265s)
	I0308 04:14:15.621686  959419 crio.go:451] duration metric: took 2.854647891s to extract the tarball
	I0308 04:14:15.621695  959419 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:15.665579  959419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:15.714582  959419 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:15.714610  959419 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:15.714620  959419 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.28.4 crio true true} ...
	I0308 04:14:15.714732  959419 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-416634 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:15.714820  959419 ssh_runner.go:195] Run: crio config
	I0308 04:14:15.781052  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:15.781083  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:15.781100  959419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:15.781144  959419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-416634 NodeName:embed-certs-416634 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:15.781360  959419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-416634"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:15.781431  959419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:15.793432  959419 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:15.793501  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:15.804828  959419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0308 04:14:15.825333  959419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:15.844895  959419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0308 04:14:15.865301  959419 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:15.870152  959419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:15.885352  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:16.033266  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:16.053365  959419 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634 for IP: 192.168.50.137
	I0308 04:14:16.053423  959419 certs.go:194] generating shared ca certs ...
	I0308 04:14:16.053446  959419 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:16.053638  959419 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:16.053693  959419 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:16.053705  959419 certs.go:256] generating profile certs ...
	I0308 04:14:16.053833  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/client.key
	I0308 04:14:16.053913  959419 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key.cba3d6eb
	I0308 04:14:16.053964  959419 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key
	I0308 04:14:16.054136  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:16.054188  959419 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:16.054204  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:16.054240  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:16.054269  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:16.054306  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:16.054368  959419 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:16.055395  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:16.116956  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:16.154530  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:16.207843  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:16.243292  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0308 04:14:16.274088  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:16.303282  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:16.330383  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/embed-certs-416634/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 04:14:16.357588  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:16.384542  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:16.411546  959419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:16.438516  959419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:16.457624  959419 ssh_runner.go:195] Run: openssl version
	I0308 04:14:16.464186  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:16.476917  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482045  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.482115  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:16.488508  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:14:16.500910  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:16.513841  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.518944  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.519007  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:16.526348  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:16.539347  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:16.551509  959419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556518  959419 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.556572  959419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:16.562911  959419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:16.576145  959419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:16.581678  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:16.588581  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:16.595463  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:16.602816  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:16.610355  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:16.617384  959419 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
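
Before deciding whether the existing cluster can simply be restarted, each control-plane certificate is checked to stay valid for at least another day (openssl x509 -checkend 86400). The same check can be expressed in Go with crypto/x509; the path below is just one of the certs listed above, and the snippet is a sketch rather than minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate is valid until", cert.NotAfter)
}
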
	I0308 04:14:16.624197  959419 kubeadm.go:391] StartCluster: {Name:embed-certs-416634 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-416634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:16.624306  959419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:16.624355  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.672923  959419 cri.go:89] found id: ""
	I0308 04:14:16.673008  959419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:16.686528  959419 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:16.686556  959419 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:16.686563  959419 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:16.686622  959419 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:16.699511  959419 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:16.700611  959419 kubeconfig.go:125] found "embed-certs-416634" server: "https://192.168.50.137:8443"
	I0308 04:14:16.703118  959419 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:16.716025  959419 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0308 04:14:16.716060  959419 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:16.716073  959419 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:16.716116  959419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:16.757485  959419 cri.go:89] found id: ""
	I0308 04:14:16.757565  959419 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:16.776775  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:16.788550  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:16.788575  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:16.788632  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:14:16.801057  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:16.801123  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:16.811900  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:14:16.824313  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:16.824393  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:16.837444  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.849598  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:16.849672  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:16.862257  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:14:16.874408  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:16.874474  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:16.887013  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:16.899466  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.021096  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:17.852168  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:14.092025  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092524  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:14.092561  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:14.092448  960528 retry.go:31] will retry after 934.086419ms: waiting for machine to come up
	I0308 04:14:15.027939  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:15.028395  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:15.028293  960528 retry.go:31] will retry after 1.545954169s: waiting for machine to come up
	I0308 04:14:16.575766  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:16.576359  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:16.576204  960528 retry.go:31] will retry after 1.481043374s: waiting for machine to come up
	I0308 04:14:18.058872  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059405  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:18.059434  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:18.059352  960528 retry.go:31] will retry after 2.066038273s: waiting for machine to come up
	I0308 04:14:18.090297  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.182409  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:18.303014  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:18.303148  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:18.804103  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.304050  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:19.340961  959419 api_server.go:72] duration metric: took 1.037946207s to wait for apiserver process to appear ...
	I0308 04:14:19.341004  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:19.341033  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:19.341662  959419 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0308 04:14:19.841401  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.568435  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.568481  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.568499  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.629777  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:14:22.629822  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:14:22.841157  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:22.846414  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:22.846449  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:20.127790  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:20.128267  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:20.128178  960528 retry.go:31] will retry after 2.369650681s: waiting for machine to come up
	I0308 04:14:22.500360  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500882  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:22.500922  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:22.500828  960528 retry.go:31] will retry after 2.776534272s: waiting for machine to come up
	I0308 04:14:23.341752  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.364004  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:23.364039  959419 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:23.841571  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:14:23.852597  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:14:23.866960  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:23.866993  959419 api_server.go:131] duration metric: took 4.525980761s to wait for apiserver health ...
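
The 403 responses above come from anonymous requests hitting an apiserver whose RBAC bootstrap roles are not yet in place, and the 500s from post-start hooks that have not finished; the wait loop simply keeps polling /healthz until it returns 200. A minimal polling sketch under the assumption that the cluster CA is not loaded, so certificate verification is skipped:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only because this sketch does not load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.50.137:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
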
	I0308 04:14:23.867020  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:14:23.867027  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:23.868578  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:23.869890  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:23.920732  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 04:14:23.954757  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:23.966806  959419 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:23.966842  959419 system_pods.go:61] "coredns-5dd5756b68-mqz25" [6e84375d-ebb8-4a73-b9d6-186a1c0b252a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:23.966848  959419 system_pods.go:61] "etcd-embed-certs-416634" [12d1e1ed-a8d4-4bde-a745-ba0b9a73d534] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:23.966855  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [79fad05e-3143-4c3d-ba19-1d9ee43ff605] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:23.966861  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [4535fe51-1c1e-47f3-8c5a-997816b7efd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:23.966870  959419 system_pods.go:61] "kube-proxy-jrd8g" [7fc2dcb7-3b3e-49d7-92de-0ac3fd6e0716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:14:23.966877  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [a9dcd10e-a5b7-4505-96da-ef4db6ca2a6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:23.966886  959419 system_pods.go:61] "metrics-server-57f55c9bc5-qnq74" [ff63a265-3425-4503-b6a1-701d891bfdb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:23.966900  959419 system_pods.go:61] "storage-provisioner" [c7e33a73-af18-42f6-b0f3-950755716ffa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:14:23.966907  959419 system_pods.go:74] duration metric: took 12.122358ms to wait for pod list to return data ...
	I0308 04:14:23.966918  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:23.973509  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:23.973557  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:23.973573  959419 node_conditions.go:105] duration metric: took 6.650555ms to run NodePressure ...
	I0308 04:14:23.973591  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:24.278263  959419 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282882  959419 kubeadm.go:733] kubelet initialised
	I0308 04:14:24.282905  959419 kubeadm.go:734] duration metric: took 4.615279ms waiting for restarted kubelet to initialise ...
	I0308 04:14:24.282914  959419 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:24.288430  959419 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:26.295272  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:25.279330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279694  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | unable to find current IP address of domain default-k8s-diff-port-968261 in network mk-default-k8s-diff-port-968261
	I0308 04:14:25.279718  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | I0308 04:14:25.279660  960528 retry.go:31] will retry after 3.612867708s: waiting for machine to come up
	I0308 04:14:30.264299  959882 start.go:364] duration metric: took 4m11.01437395s to acquireMachinesLock for "old-k8s-version-496808"
	I0308 04:14:30.264380  959882 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:30.264396  959882 fix.go:54] fixHost starting: 
	I0308 04:14:30.264871  959882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:30.264919  959882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:30.285246  959882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0308 04:14:30.285774  959882 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:30.286369  959882 main.go:141] libmachine: Using API Version  1
	I0308 04:14:30.286396  959882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:30.286857  959882 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:30.287118  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:30.287318  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetState
	I0308 04:14:30.289239  959882 fix.go:112] recreateIfNeeded on old-k8s-version-496808: state=Stopped err=<nil>
	I0308 04:14:30.289306  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	W0308 04:14:30.289500  959882 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:30.291273  959882 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-496808" ...
	I0308 04:14:28.895308  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.895714  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Found IP for machine: 192.168.61.32
	I0308 04:14:28.895733  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserving static IP address...
	I0308 04:14:28.895746  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has current primary IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.896167  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Reserved static IP address: 192.168.61.32
	I0308 04:14:28.896194  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Waiting for SSH to be available...
	I0308 04:14:28.896216  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.896247  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | skip adding static IP to network mk-default-k8s-diff-port-968261 - found existing host DHCP lease matching {name: "default-k8s-diff-port-968261", mac: "52:54:00:21:5e:5d", ip: "192.168.61.32"}
	I0308 04:14:28.896266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Getting to WaitForSSH function...
	I0308 04:14:28.898469  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898838  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:28.898875  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:28.898975  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH client type: external
	I0308 04:14:28.899012  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa (-rw-------)
	I0308 04:14:28.899052  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:28.899072  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | About to run SSH command:
	I0308 04:14:28.899087  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | exit 0
	I0308 04:14:29.021433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:29.021814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetConfigRaw
	I0308 04:14:29.022449  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.025154  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025550  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.025582  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.025814  959713 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/config.json ...
	I0308 04:14:29.025989  959713 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:29.026007  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:29.026208  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.028617  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.028990  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.029032  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.029145  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.029341  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029510  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.029646  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.029830  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.030093  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.030110  959713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:29.138251  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:29.138277  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138589  959713 buildroot.go:166] provisioning hostname "default-k8s-diff-port-968261"
	I0308 04:14:29.138620  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.138825  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.141241  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141671  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.141700  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.141805  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.142001  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142189  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.142345  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.142562  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.142777  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.142794  959713 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-968261 && echo "default-k8s-diff-port-968261" | sudo tee /etc/hostname
	I0308 04:14:29.260874  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-968261
	
	I0308 04:14:29.260911  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.263743  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264039  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.264064  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.264266  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.264466  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264639  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.264774  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.264937  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.265128  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.265146  959713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-968261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-968261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-968261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:29.380491  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:29.380543  959713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:29.380611  959713 buildroot.go:174] setting up certificates
	I0308 04:14:29.380623  959713 provision.go:84] configureAuth start
	I0308 04:14:29.380642  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetMachineName
	I0308 04:14:29.380936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:29.383965  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384382  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.384407  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.384584  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.387364  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387756  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.387779  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.387979  959713 provision.go:143] copyHostCerts
	I0308 04:14:29.388056  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:29.388071  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:29.388151  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:29.388261  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:29.388278  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:29.388299  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:29.388366  959713 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:29.388376  959713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:29.388393  959713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:29.388450  959713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-968261 san=[127.0.0.1 192.168.61.32 default-k8s-diff-port-968261 localhost minikube]
	I0308 04:14:29.555846  959713 provision.go:177] copyRemoteCerts
	I0308 04:14:29.555909  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:29.555936  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.558924  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559307  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.559340  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.559575  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.559793  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.559929  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.560012  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:29.644666  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:29.672934  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:29.700093  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0308 04:14:29.729516  959713 provision.go:87] duration metric: took 348.870469ms to configureAuth
	I0308 04:14:29.729556  959713 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:29.729751  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:29.729836  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:29.732377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732699  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:29.732727  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:29.732961  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:29.733169  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733365  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:29.733521  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:29.733686  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:29.733862  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:29.733880  959713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:30.021001  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:30.021034  959713 machine.go:97] duration metric: took 995.031559ms to provisionDockerMachine
	I0308 04:14:30.021047  959713 start.go:293] postStartSetup for "default-k8s-diff-port-968261" (driver="kvm2")
	I0308 04:14:30.021058  959713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:30.021076  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.021447  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:30.021491  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.024433  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024834  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.024864  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.024970  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.025218  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.025439  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.025615  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.110006  959713 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:30.115165  959713 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:30.115200  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:30.115302  959713 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:30.115387  959713 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:30.115473  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:30.126492  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:30.154474  959713 start.go:296] duration metric: took 133.4126ms for postStartSetup
	I0308 04:14:30.154539  959713 fix.go:56] duration metric: took 21.032017223s for fixHost
	I0308 04:14:30.154578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.157526  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.157919  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.157963  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.158123  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.158327  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158503  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.158633  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.158790  959713 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:30.158960  959713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.32 22 <nil> <nil>}
	I0308 04:14:30.158971  959713 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:30.264074  959713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871270.245462646
	
	I0308 04:14:30.264137  959713 fix.go:216] guest clock: 1709871270.245462646
	I0308 04:14:30.264151  959713 fix.go:229] Guest: 2024-03-08 04:14:30.245462646 +0000 UTC Remote: 2024-03-08 04:14:30.154552705 +0000 UTC m=+256.879640562 (delta=90.909941ms)
	I0308 04:14:30.264183  959713 fix.go:200] guest clock delta is within tolerance: 90.909941ms
	I0308 04:14:30.264192  959713 start.go:83] releasing machines lock for "default-k8s-diff-port-968261", held for 21.141704885s
	I0308 04:14:30.264239  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.264558  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:30.268288  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.268775  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.268823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.269080  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.269826  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270070  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:30.270179  959713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:30.270230  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.270314  959713 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:30.270377  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:30.273322  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273441  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273778  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273814  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:30.273852  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.273870  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:30.274056  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274062  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:30.274238  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274295  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:30.274384  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274463  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:30.274568  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.274607  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:30.378714  959713 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:30.385679  959713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:30.537456  959713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:30.544554  959713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:30.544625  959713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:30.563043  959713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:14:30.563076  959713 start.go:494] detecting cgroup driver to use...
	I0308 04:14:30.563179  959713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:30.586681  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:30.604494  959713 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:30.604594  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:30.621898  959713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:30.638813  959713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:30.781035  959713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:30.977466  959713 docker.go:233] disabling docker service ...
	I0308 04:14:30.977525  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:30.997813  959713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:31.014090  959713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:31.150946  959713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:31.284860  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:31.303494  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:31.326276  959713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:14:31.326334  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.339316  959713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:31.339394  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.352403  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.364833  959713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:31.377212  959713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:31.390281  959713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:31.401356  959713 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:31.401411  959713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:31.418014  959713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:31.430793  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:31.588906  959713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:14:31.753574  959713 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:31.753679  959713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:31.760197  959713 start.go:562] Will wait 60s for crictl version
	I0308 04:14:31.760275  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:14:31.765221  959713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:31.808519  959713 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:31.808617  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.843005  959713 ssh_runner.go:195] Run: crio --version
	I0308 04:14:31.882248  959713 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0308 04:14:28.795547  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:30.798305  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:32.799326  959419 pod_ready.go:102] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:31.883483  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetIP
	I0308 04:14:31.886744  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887197  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:31.887234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:31.887484  959713 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:31.892933  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:31.908685  959713 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:31.908810  959713 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0308 04:14:31.908868  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:31.955475  959713 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0308 04:14:31.955542  959713 ssh_runner.go:195] Run: which lz4
	I0308 04:14:31.960342  959713 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:31.965386  959713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:31.965422  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0308 04:14:30.292890  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .Start
	I0308 04:14:30.293092  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring networks are active...
	I0308 04:14:30.294119  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network default is active
	I0308 04:14:30.295816  959882 main.go:141] libmachine: (old-k8s-version-496808) Ensuring network mk-old-k8s-version-496808 is active
	I0308 04:14:30.296369  959882 main.go:141] libmachine: (old-k8s-version-496808) Getting domain xml...
	I0308 04:14:30.297252  959882 main.go:141] libmachine: (old-k8s-version-496808) Creating domain...
	I0308 04:14:31.579755  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting to get IP...
	I0308 04:14:31.580656  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.581036  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.581171  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.581002  960659 retry.go:31] will retry after 309.874279ms: waiting for machine to come up
	I0308 04:14:31.892442  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:31.892969  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:31.892994  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:31.892906  960659 retry.go:31] will retry after 306.154564ms: waiting for machine to come up
	I0308 04:14:32.200717  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.201418  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.201441  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.201372  960659 retry.go:31] will retry after 370.879608ms: waiting for machine to come up
	I0308 04:14:32.574149  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:32.574676  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:32.574727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:32.574629  960659 retry.go:31] will retry after 503.11856ms: waiting for machine to come up
	I0308 04:14:33.080123  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.080686  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.080719  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.080630  960659 retry.go:31] will retry after 729.770563ms: waiting for machine to come up
	I0308 04:14:33.811643  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:33.812137  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:33.812176  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:33.812099  960659 retry.go:31] will retry after 817.312971ms: waiting for machine to come up
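	The "will retry after ..." lines above come from minikube's retry helper (retry.go:31), which keeps polling libvirt while the restarted old-k8s-version-496808 VM waits for a DHCP lease. As a rough, self-contained sketch of that back-off pattern (the durations, the jitter, and the lookupIP probe below are illustrative assumptions, not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for asking libvirt/DHCP for the
	// domain's IP; here it always fails so the retry path is exercised.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}

	// waitForIP retries with a growing, jittered delay until an IP appears
	// or the overall timeout expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jitter keeps parallel waiters from polling in lockstep; the delay
			// grows each round, mirroring the increasing "retry after" values.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			backoff += backoff / 2
		}
		return "", fmt.Errorf("timed out after %v waiting for machine to come up", timeout)
	}

	func main() {
		if ip, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}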
	I0308 04:14:34.296966  959419 pod_ready.go:92] pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.296996  959419 pod_ready.go:81] duration metric: took 10.008542587s for pod "coredns-5dd5756b68-mqz25" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.297011  959419 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306856  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:34.306881  959419 pod_ready.go:81] duration metric: took 9.861757ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:34.306891  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.322913  959419 pod_ready.go:102] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:36.815072  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.815106  959419 pod_ready.go:81] duration metric: took 2.508207009s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.815127  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822068  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.822097  959419 pod_ready.go:81] duration metric: took 6.960492ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.822110  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828570  959419 pod_ready.go:92] pod "kube-proxy-jrd8g" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.828600  959419 pod_ready.go:81] duration metric: took 6.48188ms for pod "kube-proxy-jrd8g" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.828612  959419 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835002  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:36.835032  959419 pod_ready.go:81] duration metric: took 6.410979ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:36.835045  959419 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
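	The pod_ready.go lines above poll each system-critical pod of embed-certs-416634 until its Ready condition reports True, within the 4m0s budget. As a minimal client-go sketch of that kind of wait, assuming a kubeconfig at the default location and a 2-second poll interval (neither taken from minikube's code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the API server until the named pod is Ready or the
	// timeout expires; transient Get errors simply trigger another poll.
	func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return isPodReady(pod), nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-mqz25", 4*time.Minute); err != nil {
			fmt.Println("pod never became Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}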
	I0308 04:14:34.051815  959713 crio.go:444] duration metric: took 2.091503353s to copy over tarball
	I0308 04:14:34.051897  959713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:37.052484  959713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.000548217s)
	I0308 04:14:37.052526  959713 crio.go:451] duration metric: took 3.00067861s to extract the tarball
	I0308 04:14:37.052537  959713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:14:37.111317  959713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:37.165154  959713 crio.go:496] all images are preloaded for cri-o runtime.
	I0308 04:14:37.165182  959713 cache_images.go:84] Images are preloaded, skipping loading
	I0308 04:14:37.165191  959713 kubeadm.go:928] updating node { 192.168.61.32 8444 v1.28.4 crio true true} ...
	I0308 04:14:37.165362  959713 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-968261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:14:37.165464  959713 ssh_runner.go:195] Run: crio config
	I0308 04:14:37.232251  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:37.232286  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:37.232320  959713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:14:37.232356  959713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.32 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-968261 NodeName:default-k8s-diff-port-968261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:14:37.232550  959713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-968261"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:14:37.232624  959713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 04:14:37.247819  959713 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:14:37.247882  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:14:37.258136  959713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0308 04:14:37.278170  959713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:14:37.296984  959713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
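	(Editor's note: the kubeadm config dumped above is rendered per profile and copied to /var/tmp/minikube/kubeadm.yaml.new. The following is a minimal, self-contained Go sketch of how such a config could be produced from a few per-profile values with text/template; the struct fields and template are illustrative assumptions, not minikube's actual generator, and the values are taken from the log above.)

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams holds the handful of per-profile values that vary between clusters.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    	ClusterName      string
    	K8sVersion       string
    	PodSubnet        string
    	ServiceSubnet    string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress: "192.168.61.32",
    		BindPort:         8444,
    		NodeName:         "default-k8s-diff-port-968261",
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    		ClusterName:      "mk",
    		K8sVersion:       "v1.28.4",
    		PodSubnet:        "10.244.0.0/16",
    		ServiceSubnet:    "10.96.0.0/12",
    	}
    	// Render to stdout; in the log the rendered file ends up at /var/tmp/minikube/kubeadm.yaml.new.
    	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }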
	I0308 04:14:37.317501  959713 ssh_runner.go:195] Run: grep 192.168.61.32	control-plane.minikube.internal$ /etc/hosts
	I0308 04:14:37.322272  959713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:37.336534  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:37.482010  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:37.503034  959713 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261 for IP: 192.168.61.32
	I0308 04:14:37.503061  959713 certs.go:194] generating shared ca certs ...
	I0308 04:14:37.503085  959713 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:37.503275  959713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:14:37.503337  959713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:14:37.503350  959713 certs.go:256] generating profile certs ...
	I0308 04:14:37.503455  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.key
	I0308 04:14:37.692181  959713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key.909e253b
	I0308 04:14:37.692334  959713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key
	I0308 04:14:37.692504  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:14:37.692552  959713 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:14:37.692567  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:14:37.692613  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:14:37.692658  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:14:37.692702  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:14:37.692756  959713 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:37.693700  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:14:37.729960  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:14:37.759343  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:14:37.786779  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:14:37.813620  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0308 04:14:37.843520  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:14:37.871677  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:14:37.899574  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:14:37.928175  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:14:37.956297  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:14:37.983110  959713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:14:38.013258  959713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:14:38.035666  959713 ssh_runner.go:195] Run: openssl version
	I0308 04:14:38.042548  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:14:38.055810  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061027  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.061076  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:14:38.067420  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:14:38.080321  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:14:38.092963  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098055  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.098099  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:14:38.104529  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:14:38.117473  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:14:38.130239  959713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135231  959713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.135294  959713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:14:38.141511  959713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
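	(Editor's note: the three ln -fs commands above install each CA into the node's trust store under its OpenSSL subject hash, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem. The sketch below reproduces that step in Go by shelling out to openssl, as the log does; it is illustrative and needs root to write under /etc/ssl/certs.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Equivalent of: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// ln -fs <pem> <link>: replace any stale link, then create the new one.
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", pem, "->", link)
    }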
	I0308 04:14:38.156136  959713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:14:38.161082  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:14:38.167816  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:14:38.174337  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:14:38.181239  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:14:38.187989  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:14:38.194320  959713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
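	(Editor's note: each "openssl x509 -checkend 86400" run above asks whether a control-plane certificate expires within the next 24 hours. A minimal Go equivalent using crypto/x509 is sketched below; the path is taken from the log and the check would be run on the minikube node itself.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file at path
    // expires within duration d (the -checkend 86400 equivalent uses d = 24h).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM data", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }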
	I0308 04:14:38.202773  959713 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-968261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-968261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:14:38.202907  959713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:14:38.202964  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:38.249552  959713 cri.go:89] found id: ""
	I0308 04:14:38.249661  959713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:14:38.262277  959713 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:14:38.262305  959713 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:14:38.262312  959713 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:14:38.262368  959713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:14:38.276080  959713 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:14:38.277166  959713 kubeconfig.go:125] found "default-k8s-diff-port-968261" server: "https://192.168.61.32:8444"
	I0308 04:14:38.279595  959713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:14:38.291483  959713 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.32
	I0308 04:14:38.291522  959713 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:14:38.291539  959713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:14:38.291597  959713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:14:34.631134  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:34.631593  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:34.631624  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:34.631539  960659 retry.go:31] will retry after 800.453151ms: waiting for machine to come up
	I0308 04:14:35.434243  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:35.434723  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:35.434755  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:35.434660  960659 retry.go:31] will retry after 1.486974488s: waiting for machine to come up
	I0308 04:14:36.923377  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:36.923823  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:36.923860  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:36.923771  960659 retry.go:31] will retry after 1.603577122s: waiting for machine to come up
	I0308 04:14:38.529600  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:38.530061  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:38.530087  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:38.530020  960659 retry.go:31] will retry after 2.055793486s: waiting for machine to come up
	I0308 04:14:38.985685  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:41.344340  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:38.339059  959713 cri.go:89] found id: ""
	I0308 04:14:38.400166  959713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:14:38.427474  959713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:14:38.443270  959713 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:14:38.443295  959713 kubeadm.go:156] found existing configuration files:
	
	I0308 04:14:38.443350  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0308 04:14:38.457643  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:14:38.457731  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:14:38.469552  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0308 04:14:38.480889  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:14:38.480954  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:14:38.492753  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.504207  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:14:38.504263  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:14:38.515461  959713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0308 04:14:38.525921  959713 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:14:38.525973  959713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:14:38.537732  959713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:14:38.549220  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:38.685924  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.425996  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.647834  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.751001  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:39.864518  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:14:39.864651  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.364923  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.865347  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:14:40.950999  959713 api_server.go:72] duration metric: took 1.086480958s to wait for apiserver process to appear ...
	I0308 04:14:40.951036  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:14:40.951064  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.951732  959713 api_server.go:269] stopped: https://192.168.61.32:8444/healthz: Get "https://192.168.61.32:8444/healthz": dial tcp 192.168.61.32:8444: connect: connection refused
	I0308 04:14:41.451391  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:40.587291  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:40.587859  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:40.587895  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:40.587801  960659 retry.go:31] will retry after 1.975105776s: waiting for machine to come up
	I0308 04:14:42.566105  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:42.566639  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:42.566671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:42.566584  960659 retry.go:31] will retry after 2.508884013s: waiting for machine to come up
	I0308 04:14:44.502748  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.502791  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.502813  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.519733  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.519779  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:44.951896  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:44.956977  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:44.957014  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.451561  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.457255  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:14:45.457304  959713 api_server.go:103] status: https://192.168.61.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:14:45.951515  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:14:45.956760  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:14:45.967364  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:14:45.967395  959713 api_server.go:131] duration metric: took 5.016350679s to wait for apiserver health ...
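	(Editor's note: the healthz polling above repeats roughly every 500ms, accepting 500 responses with "[-]poststarthook/... failed" until the endpoint finally returns 200. Below is a simplified stand-alone sketch of that retry loop; it skips TLS verification for brevity, whereas the real client authenticates with the cluster CA and client certificates.)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz GETs the apiserver /healthz endpoint until it returns 200
    // or the overall timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "healthz returned 200: ok"
    			}
    			// 500 with failed poststarthooks means components are still starting; retry.
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.32:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }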
	I0308 04:14:45.967404  959713 cni.go:84] Creating CNI manager for ""
	I0308 04:14:45.967412  959713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:14:45.969020  959713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:14:45.970842  959713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:14:45.983807  959713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
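	(Editor's note: the 457-byte /etc/cni/net.d/1-k8s.conflist written above is not printed in the log. The sketch below just emits a typical bridge-plus-portmap CNI conflist for the 10.244.0.0/16 pod CIDR used in this run; treat the plugin settings as assumptions rather than the exact file contents.)

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// A representative bridge CNI conflist; host-local IPAM hands out pod IPs
    	// from the cluster's pod CIDR, portmap supports hostPort mappings.
    	conflist := map[string]interface{}{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]interface{}{
    			{
    				"type":        "bridge",
    				"bridge":      "bridge",
    				"isGateway":   true,
    				"ipMasq":      true,
    				"hairpinMode": true,
    				"ipam": map[string]interface{}{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }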
	I0308 04:14:46.002371  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:14:46.026300  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:14:46.026336  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:14:46.026344  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:14:46.026350  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:14:46.026361  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:14:46.026365  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:14:46.026372  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:14:46.026376  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:14:46.026380  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:14:46.026388  959713 system_pods.go:74] duration metric: took 23.994961ms to wait for pod list to return data ...
	I0308 04:14:46.026399  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:14:46.030053  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:14:46.030080  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:14:46.030095  959713 node_conditions.go:105] duration metric: took 3.690947ms to run NodePressure ...
	I0308 04:14:46.030117  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:14:46.250414  959713 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256492  959713 kubeadm.go:733] kubelet initialised
	I0308 04:14:46.256512  959713 kubeadm.go:734] duration metric: took 6.067616ms waiting for restarted kubelet to initialise ...
	I0308 04:14:46.256521  959713 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:46.261751  959713 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.268095  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268126  959713 pod_ready.go:81] duration metric: took 6.349898ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.268139  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.268148  959713 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.279644  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279675  959713 pod_ready.go:81] duration metric: took 11.518686ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.279686  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.279691  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.285549  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285578  959713 pod_ready.go:81] duration metric: took 5.878548ms for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.285592  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.285604  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.406507  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406537  959713 pod_ready.go:81] duration metric: took 120.920366ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.406549  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.406555  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:46.807550  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807579  959713 pod_ready.go:81] duration metric: took 401.017434ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:46.807589  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-proxy-qpxcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:46.807597  959713 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.207852  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207886  959713 pod_ready.go:81] duration metric: took 400.280849ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.207903  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.207910  959713 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:47.608634  959713 pod_ready.go:97] node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608662  959713 pod_ready.go:81] duration metric: took 400.74455ms for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:14:47.608674  959713 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-968261" hosting pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:47.608680  959713 pod_ready.go:38] duration metric: took 1.352150807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
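	(Editor's note: the pod_ready.go lines above poll each system-critical pod and skip it while the node itself reports Ready=False. A rough, stand-alone client-go sketch of the underlying per-pod check is shown below; the kubeconfig path and pod name are taken from the log for illustration, and this is not minikube's actual implementation.)

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18333-911675/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-xqqds", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s Ready=%v\n", pod.Name, podIsReady(pod))
    }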
	I0308 04:14:47.608697  959713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:14:47.622064  959713 ops.go:34] apiserver oom_adj: -16
	I0308 04:14:47.622090  959713 kubeadm.go:591] duration metric: took 9.359769706s to restartPrimaryControlPlane
	I0308 04:14:47.622099  959713 kubeadm.go:393] duration metric: took 9.419338829s to StartCluster
	I0308 04:14:47.622121  959713 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.622212  959713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:14:47.624288  959713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:14:47.624540  959713 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.32 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:14:47.626481  959713 out.go:177] * Verifying Kubernetes components...
	I0308 04:14:47.624641  959713 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:14:47.624854  959713 config.go:182] Loaded profile config "default-k8s-diff-port-968261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:14:47.626597  959713 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628017  959713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:47.628022  959713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-968261"
	I0308 04:14:47.626599  959713 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628187  959713 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628200  959713 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:14:47.626598  959713 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-968261"
	I0308 04:14:47.628279  959713 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.628289  959713 addons.go:243] addon metrics-server should already be in state true
	I0308 04:14:47.628312  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628237  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.628559  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628601  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628658  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.628687  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.628690  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.644741  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0308 04:14:47.645311  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646423  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0308 04:14:47.646435  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I0308 04:14:47.646849  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.646871  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.646926  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.646933  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.647282  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647462  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647485  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647623  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.647664  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.647822  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.647940  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.647986  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.648024  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.648043  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.648550  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.648576  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.651653  959713 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-968261"
	W0308 04:14:47.651673  959713 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:14:47.651701  959713 host.go:66] Checking if "default-k8s-diff-port-968261" exists ...
	I0308 04:14:47.651983  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.652018  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.664562  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0308 04:14:47.665175  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.665856  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.665872  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.665942  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0308 04:14:47.666109  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45091
	I0308 04:14:47.666305  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666418  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.666451  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.666607  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.666801  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.666836  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.666990  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.667008  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.667119  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.667240  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.667792  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.668541  959713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:47.668600  959713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:47.668827  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.671180  959713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:14:47.669242  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.672820  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:14:47.672842  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:14:47.672865  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.674732  959713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:14:43.347393  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:45.843053  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.844076  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:47.676187  959713 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.676205  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:14:47.676232  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.675606  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676304  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.676330  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.676396  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.676578  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.676709  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.676828  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.678747  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679211  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.679234  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.679339  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.679517  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.679644  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.679767  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.684943  959713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0308 04:14:47.685247  959713 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:47.685778  959713 main.go:141] libmachine: Using API Version  1
	I0308 04:14:47.685797  959713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:47.686151  959713 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:47.686348  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetState
	I0308 04:14:47.687638  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .DriverName
	I0308 04:14:47.687895  959713 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:47.687913  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:14:47.687931  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHHostname
	I0308 04:14:47.690795  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691321  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5e:5d", ip: ""} in network mk-default-k8s-diff-port-968261: {Iface:virbr4 ExpiryTime:2024-03-08 05:14:21 +0000 UTC Type:0 Mac:52:54:00:21:5e:5d Iaid: IPaddr:192.168.61.32 Prefix:24 Hostname:default-k8s-diff-port-968261 Clientid:01:52:54:00:21:5e:5d}
	I0308 04:14:47.691353  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | domain default-k8s-diff-port-968261 has defined IP address 192.168.61.32 and MAC address 52:54:00:21:5e:5d in network mk-default-k8s-diff-port-968261
	I0308 04:14:47.691741  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHPort
	I0308 04:14:47.691898  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHKeyPath
	I0308 04:14:47.692045  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .GetSSHUsername
	I0308 04:14:47.692233  959713 sshutil.go:53] new ssh client: &{IP:192.168.61.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/default-k8s-diff-port-968261/id_rsa Username:docker}
	I0308 04:14:47.836814  959713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:14:47.858400  959713 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:47.928515  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:14:47.933619  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:14:48.023215  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:14:48.023252  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:14:48.083274  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:14:48.083305  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:14:48.144920  959713 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:48.144961  959713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:14:48.168221  959713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:14:45.076659  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:45.077146  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:45.077180  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:45.077084  960659 retry.go:31] will retry after 3.488591872s: waiting for machine to come up
	I0308 04:14:48.567653  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:48.568101  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | unable to find current IP address of domain old-k8s-version-496808 in network mk-old-k8s-version-496808
	I0308 04:14:48.568127  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | I0308 04:14:48.568038  960659 retry.go:31] will retry after 4.950017309s: waiting for machine to come up
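The retry lines above are libmachine polling the libvirt DHCP leases until the old-k8s-version-496808 VM reports an address, backing off a little longer each time. A minimal shell sketch of the same idea (domain name taken from the log; the virsh polling loop is an illustration, not minikube's own code):

    # Poll a libvirt domain until it holds an IPv4 lease, backing off between tries.
    DOMAIN=old-k8s-version-496808
    for attempt in $(seq 1 30); do
      IP=$(sudo virsh domifaddr "$DOMAIN" | awk '/ipv4/ {print $4}' | cut -d/ -f1)
      if [ -n "$IP" ]; then
        echo "$DOMAIN is up at $IP"
        break
      fi
      sleep "$attempt"   # roughly mirrors the growing retry intervals in the log
    done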
	I0308 04:14:49.214478  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280808647s)
	I0308 04:14:49.214540  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214551  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214544  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.285990638s)
	I0308 04:14:49.214583  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214597  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214875  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214889  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214898  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214905  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.214923  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.214963  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.214974  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.214982  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.215258  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215287  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215294  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.215566  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.215604  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.215623  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.222132  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.222159  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.222390  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.222407  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301386  959713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.133100514s)
	I0308 04:14:49.301455  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301473  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.301786  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.301805  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.301814  959713 main.go:141] libmachine: Making call to close driver server
	I0308 04:14:49.301819  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.301823  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) Calling .Close
	I0308 04:14:49.302130  959713 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:14:49.302154  959713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:14:49.302165  959713 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-968261"
	I0308 04:14:49.302135  959713 main.go:141] libmachine: (default-k8s-diff-port-968261) DBG | Closing plugin on server side
	I0308 04:14:49.304864  959713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:14:49.846930  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:52.345484  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:49.306195  959713 addons.go:505] duration metric: took 1.681564409s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
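At this point the log reports the storage-provisioner, default-storageclass and metrics-server addons as applied and verified for the default-k8s-diff-port-968261 profile. A quick manual spot check along the same lines (context name from the log; the deployment name, namespace and APIService name are the usual minikube addon defaults, assumed here):

    # Confirm the metrics-server objects the addon step just applied are present.
    kubectl --context default-k8s-diff-port-968261 -n kube-system get deploy,svc metrics-server
    kubectl --context default-k8s-diff-port-968261 get apiservice v1beta1.metrics.k8s.io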
	I0308 04:14:49.862917  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:51.863135  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:53.522128  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522553  959882 main.go:141] libmachine: (old-k8s-version-496808) Found IP for machine: 192.168.39.3
	I0308 04:14:53.522589  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has current primary IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.522598  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserving static IP address...
	I0308 04:14:53.523084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.523124  959882 main.go:141] libmachine: (old-k8s-version-496808) Reserved static IP address: 192.168.39.3
	I0308 04:14:53.523148  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | skip adding static IP to network mk-old-k8s-version-496808 - found existing host DHCP lease matching {name: "old-k8s-version-496808", mac: "52:54:00:0b:c9:35", ip: "192.168.39.3"}
	I0308 04:14:53.523165  959882 main.go:141] libmachine: (old-k8s-version-496808) Waiting for SSH to be available...
	I0308 04:14:53.523191  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Getting to WaitForSSH function...
	I0308 04:14:53.525546  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.525929  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.525962  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.526084  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH client type: external
	I0308 04:14:53.526111  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa (-rw-------)
	I0308 04:14:53.526143  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:14:53.526159  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | About to run SSH command:
	I0308 04:14:53.526174  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | exit 0
	I0308 04:14:53.653827  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | SSH cmd err, output: <nil>: 
	I0308 04:14:53.654342  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetConfigRaw
	I0308 04:14:53.655143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:53.658362  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.658850  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.658892  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.659106  959882 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/config.json ...
	I0308 04:14:53.659337  959882 machine.go:94] provisionDockerMachine start ...
	I0308 04:14:53.659358  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:53.659581  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.662234  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662671  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.662696  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.662887  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.663068  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.663478  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.663702  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.663968  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.663984  959882 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:14:53.774239  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:14:53.774273  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774566  959882 buildroot.go:166] provisioning hostname "old-k8s-version-496808"
	I0308 04:14:53.774597  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:53.774847  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.777568  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.777934  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.777970  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.778094  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.778297  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778469  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.778626  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.778792  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.779007  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.779027  959882 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-496808 && echo "old-k8s-version-496808" | sudo tee /etc/hostname
	I0308 04:14:53.906030  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-496808
	
	I0308 04:14:53.906067  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:53.909099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909530  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:53.909565  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:53.909733  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:53.909957  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910157  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:53.910320  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:53.910494  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:53.910681  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:53.910698  959882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-496808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-496808/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-496808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:14:54.029343  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:14:54.029401  959882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:14:54.029441  959882 buildroot.go:174] setting up certificates
	I0308 04:14:54.029450  959882 provision.go:84] configureAuth start
	I0308 04:14:54.029462  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetMachineName
	I0308 04:14:54.029743  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.032515  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.032925  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.032972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.033103  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.035621  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036020  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.036047  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.036193  959882 provision.go:143] copyHostCerts
	I0308 04:14:54.036258  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:14:54.036271  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:14:54.036341  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:14:54.036455  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:14:54.036466  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:14:54.036497  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:14:54.036575  959882 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:14:54.036584  959882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:14:54.036611  959882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:14:54.036692  959882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-496808 san=[127.0.0.1 192.168.39.3 localhost minikube old-k8s-version-496808]
	I0308 04:14:54.926895  959302 start.go:364] duration metric: took 1m0.248483539s to acquireMachinesLock for "no-preload-477676"
	I0308 04:14:54.926959  959302 start.go:96] Skipping create...Using existing machine configuration
	I0308 04:14:54.926970  959302 fix.go:54] fixHost starting: 
	I0308 04:14:54.927444  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:14:54.927486  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:14:54.947990  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45261
	I0308 04:14:54.948438  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:14:54.949033  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:14:54.949066  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:14:54.949479  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:14:54.949696  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:14:54.949848  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:14:54.951469  959302 fix.go:112] recreateIfNeeded on no-preload-477676: state=Stopped err=<nil>
	I0308 04:14:54.951492  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	W0308 04:14:54.951632  959302 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 04:14:54.953357  959302 out.go:177] * Restarting existing kvm2 VM for "no-preload-477676" ...
	I0308 04:14:54.199880  959882 provision.go:177] copyRemoteCerts
	I0308 04:14:54.199958  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:14:54.199990  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.202727  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203099  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.203124  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.203374  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.203558  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.203716  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.203903  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.288575  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0308 04:14:54.318968  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 04:14:54.346348  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:14:54.372793  959882 provision.go:87] duration metric: took 343.324409ms to configureAuth
	I0308 04:14:54.372824  959882 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:14:54.373050  959882 config.go:182] Loaded profile config "old-k8s-version-496808": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0308 04:14:54.373143  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.375972  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376329  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.376361  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.376520  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.376711  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.376889  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.377020  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.377155  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.377369  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.377393  959882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:14:54.682289  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:14:54.682326  959882 machine.go:97] duration metric: took 1.022971943s to provisionDockerMachine
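The provisioning command a few lines up is partly mangled by Go's formatter ("%!s(MISSING)" stands where a %s verb and its argument belonged). Reconstructed from the SSH output that follows it, the step amounts to roughly this (an equivalent sketch, not the literal logged string):

    # Write the crio sysconfig drop-in and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio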
	I0308 04:14:54.682341  959882 start.go:293] postStartSetup for "old-k8s-version-496808" (driver="kvm2")
	I0308 04:14:54.682355  959882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:14:54.682378  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.682777  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:14:54.682817  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.686054  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686492  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.686519  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.686703  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.686940  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.687131  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.687288  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.773203  959882 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:14:54.778126  959882 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:14:54.778154  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:14:54.778230  959882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:14:54.778323  959882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:14:54.778449  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:14:54.788838  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:14:54.816895  959882 start.go:296] duration metric: took 134.54064ms for postStartSetup
	I0308 04:14:54.816932  959882 fix.go:56] duration metric: took 24.552538201s for fixHost
	I0308 04:14:54.816954  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.819669  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.820140  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.820242  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.820435  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820630  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.820754  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.820907  959882 main.go:141] libmachine: Using SSH client type: native
	I0308 04:14:54.821105  959882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0308 04:14:54.821120  959882 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:14:54.926690  959882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871294.910163930
	
	I0308 04:14:54.926718  959882 fix.go:216] guest clock: 1709871294.910163930
	I0308 04:14:54.926728  959882 fix.go:229] Guest: 2024-03-08 04:14:54.91016393 +0000 UTC Remote: 2024-03-08 04:14:54.816936754 +0000 UTC m=+275.715567131 (delta=93.227176ms)
	I0308 04:14:54.926785  959882 fix.go:200] guest clock delta is within tolerance: 93.227176ms
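The mangled command a few lines up is date +%s.%N (Unix seconds with nanoseconds), which is run on the guest and compared against the host clock to produce the ~93ms delta reported here. A hedged manual equivalent (guest address, SSH user and key path taken from the log):

    # Compare guest and host clocks the same way minikube's fix step does.
    GUEST=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa \
      docker@192.168.39.3 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "guest-host delta: %.3fs\n", g - h }'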
	I0308 04:14:54.926795  959882 start.go:83] releasing machines lock for "old-k8s-version-496808", held for 24.662440268s
	I0308 04:14:54.926833  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.927124  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:54.930220  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930700  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.930728  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.930919  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931497  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931688  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .DriverName
	I0308 04:14:54.931917  959882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:14:54.931989  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.931923  959882 ssh_runner.go:195] Run: cat /version.json
	I0308 04:14:54.932054  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHHostname
	I0308 04:14:54.935104  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935380  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935554  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935578  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.935723  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.935855  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:54.935886  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.935885  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:54.936079  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHPort
	I0308 04:14:54.936078  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936288  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHKeyPath
	I0308 04:14:54.936347  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:54.936430  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetSSHUsername
	I0308 04:14:54.936573  959882 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/old-k8s-version-496808/id_rsa Username:docker}
	I0308 04:14:55.043162  959882 ssh_runner.go:195] Run: systemctl --version
	I0308 04:14:55.049749  959882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:14:55.201176  959882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:14:55.208313  959882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:14:55.208392  959882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:14:55.226833  959882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
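The find invocation above also has its %p verb eaten by the log formatter. Reconstructed, the step renames any bridge or podman CNI configs out of the way so they stop taking effect; the conflist named in the result line is the one it matched. An equivalent sketch (the -exec form is tidied, otherwise copied from the log):

    # Disable bridge/podman CNI configs by renaming them with a .mk_disabled suffix.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;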
	I0308 04:14:55.226860  959882 start.go:494] detecting cgroup driver to use...
	I0308 04:14:55.226938  959882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:14:55.250059  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:14:55.266780  959882 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:14:55.266839  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:14:55.285787  959882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:14:55.303007  959882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:14:55.444073  959882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:14:55.605216  959882 docker.go:233] disabling docker service ...
	I0308 04:14:55.605305  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:14:55.623412  959882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:14:55.637116  959882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:14:55.780621  959882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:14:55.928071  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:14:55.945081  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:14:55.968584  959882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0308 04:14:55.968653  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:55.985540  959882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:14:55.985625  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.000068  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.019434  959882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:14:56.035682  959882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:14:56.055515  959882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:14:56.066248  959882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:14:56.066338  959882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:14:56.082813  959882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:14:56.093567  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:14:56.236190  959882 ssh_runner.go:195] Run: sudo systemctl restart crio
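Taken together, the sed, sysctl and modprobe calls above rewrite /etc/crio/crio.conf.d/02-crio.conf for the v1.20-era pause image and the cgroupfs driver, make sure bridge netfilter and IP forwarding are available, and then restart CRI-O. Condensed into one script (commands and values copied from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter            # bridge-nf-call-iptables was absent before this
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio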
	I0308 04:14:56.389773  959882 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:14:56.389883  959882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:14:56.396303  959882 start.go:562] Will wait 60s for crictl version
	I0308 04:14:56.396412  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:14:56.400918  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:14:56.441200  959882 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:14:56.441312  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.474650  959882 ssh_runner.go:195] Run: crio --version
	I0308 04:14:56.513682  959882 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0308 04:14:54.954687  959302 main.go:141] libmachine: (no-preload-477676) Calling .Start
	I0308 04:14:54.954868  959302 main.go:141] libmachine: (no-preload-477676) Ensuring networks are active...
	I0308 04:14:54.955716  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network default is active
	I0308 04:14:54.956166  959302 main.go:141] libmachine: (no-preload-477676) Ensuring network mk-no-preload-477676 is active
	I0308 04:14:54.956684  959302 main.go:141] libmachine: (no-preload-477676) Getting domain xml...
	I0308 04:14:54.957357  959302 main.go:141] libmachine: (no-preload-477676) Creating domain...
	I0308 04:14:56.253326  959302 main.go:141] libmachine: (no-preload-477676) Waiting to get IP...
	I0308 04:14:56.254539  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.255046  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.255149  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.255021  960882 retry.go:31] will retry after 249.989758ms: waiting for machine to come up
	I0308 04:14:56.506677  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.507151  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.507182  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.507096  960882 retry.go:31] will retry after 265.705108ms: waiting for machine to come up
	I0308 04:14:56.774690  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:56.775278  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:56.775315  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:56.775223  960882 retry.go:31] will retry after 357.288146ms: waiting for machine to come up
	I0308 04:14:57.133994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.135007  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.135041  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.134974  960882 retry.go:31] will retry after 507.293075ms: waiting for machine to come up
	I0308 04:14:54.843178  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:56.847580  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:53.864372  959713 node_ready.go:53] node "default-k8s-diff-port-968261" has status "Ready":"False"
	I0308 04:14:55.364572  959713 node_ready.go:49] node "default-k8s-diff-port-968261" has status "Ready":"True"
	I0308 04:14:55.364606  959713 node_ready.go:38] duration metric: took 7.506172855s for node "default-k8s-diff-port-968261" to be "Ready" ...
	I0308 04:14:55.364630  959713 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:14:55.374067  959713 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.379982  959713 pod_ready.go:92] pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.380009  959713 pod_ready.go:81] duration metric: took 5.913005ms for pod "coredns-5dd5756b68-xqqds" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.380020  959713 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385363  959713 pod_ready.go:92] pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:55.385389  959713 pod_ready.go:81] duration metric: took 5.360352ms for pod "etcd-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:55.385400  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:57.397434  959713 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"False"
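While the old-k8s-version machine is still provisioning, the default-k8s-diff-port-968261 run above has moved on to readiness: the node went Ready after about 7.5s and the system-critical pods are now being checked one by one. Hedged manual equivalents of those checks (context and object names taken from the log):

    # Node Ready condition, then the control-plane pods the log is waiting on.
    kubectl --context default-k8s-diff-port-968261 get node default-k8s-diff-port-968261 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl --context default-k8s-diff-port-968261 -n kube-system get pods \
      coredns-5dd5756b68-xqqds \
      etcd-default-k8s-diff-port-968261 \
      kube-apiserver-default-k8s-diff-port-968261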
	I0308 04:14:56.514749  959882 main.go:141] libmachine: (old-k8s-version-496808) Calling .GetIP
	I0308 04:14:56.517431  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.517834  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:c9:35", ip: ""} in network mk-old-k8s-version-496808: {Iface:virbr1 ExpiryTime:2024-03-08 05:14:43 +0000 UTC Type:0 Mac:52:54:00:0b:c9:35 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:old-k8s-version-496808 Clientid:01:52:54:00:0b:c9:35}
	I0308 04:14:56.517861  959882 main.go:141] libmachine: (old-k8s-version-496808) DBG | domain old-k8s-version-496808 has defined IP address 192.168.39.3 and MAC address 52:54:00:0b:c9:35 in network mk-old-k8s-version-496808
	I0308 04:14:56.518087  959882 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0308 04:14:56.523051  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:14:56.537776  959882 kubeadm.go:877] updating cluster {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:14:56.537920  959882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 04:14:56.537985  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:14:56.597725  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:14:56.597806  959882 ssh_runner.go:195] Run: which lz4
	I0308 04:14:56.604041  959882 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 04:14:56.610064  959882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 04:14:56.610096  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0308 04:14:58.702256  959882 crio.go:444] duration metric: took 2.098251146s to copy over tarball
	I0308 04:14:58.702363  959882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 04:14:57.644550  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:57.645018  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:57.645047  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:57.644964  960882 retry.go:31] will retry after 513.468978ms: waiting for machine to come up
	I0308 04:14:58.159920  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:58.160530  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:58.160590  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:58.160489  960882 retry.go:31] will retry after 931.323215ms: waiting for machine to come up
	I0308 04:14:59.093597  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.094185  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.094228  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.094138  960882 retry.go:31] will retry after 830.396135ms: waiting for machine to come up
	I0308 04:14:59.925930  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:14:59.926370  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:14:59.926408  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:14:59.926316  960882 retry.go:31] will retry after 1.324869025s: waiting for machine to come up
	I0308 04:15:01.252738  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:01.253246  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:01.253314  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:01.253139  960882 retry.go:31] will retry after 1.692572504s: waiting for machine to come up
	I0308 04:14:59.343942  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:01.346860  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:14:58.396262  959713 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.396292  959713 pod_ready.go:81] duration metric: took 3.010882138s for pod "kube-apiserver-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.396306  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405802  959713 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.405827  959713 pod_ready.go:81] duration metric: took 9.512763ms for pod "kube-controller-manager-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.405842  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416144  959713 pod_ready.go:92] pod "kube-proxy-qpxcp" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.416172  959713 pod_ready.go:81] duration metric: took 10.321457ms for pod "kube-proxy-qpxcp" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.416187  959713 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564939  959713 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace has status "Ready":"True"
	I0308 04:14:58.564968  959713 pod_ready.go:81] duration metric: took 148.772018ms for pod "kube-scheduler-default-k8s-diff-port-968261" in "kube-system" namespace to be "Ready" ...
	I0308 04:14:58.564983  959713 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:00.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.575562  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:02.004116  959882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.301698569s)
	I0308 04:15:02.004162  959882 crio.go:451] duration metric: took 3.301864538s to extract the tarball
	I0308 04:15:02.004174  959882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 04:15:02.052658  959882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:02.095405  959882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0308 04:15:02.095434  959882 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.095624  959882 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.095557  959882 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.095565  959882 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0308 04:15:02.095684  959882 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.095747  959882 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.095551  959882 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097730  959882 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0308 04:15:02.097838  959882 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.097814  959882 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.097724  959882 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.097736  959882 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.098010  959882 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.097914  959882 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.237485  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.240937  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.243494  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.251785  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.252022  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.259248  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.290325  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0308 04:15:02.381595  959882 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0308 04:15:02.381656  959882 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.381714  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.386828  959882 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:02.456504  959882 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0308 04:15:02.456561  959882 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.456615  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.477936  959882 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0308 04:15:02.477999  959882 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.478055  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.489942  959882 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0308 04:15:02.489999  959882 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.490053  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.490105  959882 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0308 04:15:02.490149  959882 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.490199  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512354  959882 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0308 04:15:02.512435  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0308 04:15:02.512452  959882 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0308 04:15:02.512471  959882 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0308 04:15:02.512527  959882 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.512567  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.512491  959882 ssh_runner.go:195] Run: which crictl
	I0308 04:15:02.643770  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0308 04:15:02.643808  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0308 04:15:02.643806  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0308 04:15:02.643868  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0308 04:15:02.643918  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0308 04:15:02.643945  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0308 04:15:02.643949  959882 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0308 04:15:02.798719  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0308 04:15:02.798734  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0308 04:15:02.798821  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0308 04:15:02.799229  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0308 04:15:02.799309  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0308 04:15:02.799333  959882 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0308 04:15:02.799392  959882 cache_images.go:92] duration metric: took 703.946482ms to LoadCachedImages
	W0308 04:15:02.799504  959882 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0308 04:15:02.799524  959882 kubeadm.go:928] updating node { 192.168.39.3 8443 v1.20.0 crio true true} ...
	I0308 04:15:02.799674  959882 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-496808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:02.799746  959882 ssh_runner.go:195] Run: crio config
	I0308 04:15:02.862352  959882 cni.go:84] Creating CNI manager for ""
	I0308 04:15:02.862378  959882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:02.862391  959882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:02.862423  959882 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-496808 NodeName:old-k8s-version-496808 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0308 04:15:02.862637  959882 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-496808"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:02.862709  959882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0308 04:15:02.874570  959882 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:02.874647  959882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:02.886667  959882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0308 04:15:02.906891  959882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 04:15:02.926483  959882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0308 04:15:02.947450  959882 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:02.952145  959882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:02.968125  959882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:03.112315  959882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:15:03.132476  959882 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808 for IP: 192.168.39.3
	I0308 04:15:03.132504  959882 certs.go:194] generating shared ca certs ...
	I0308 04:15:03.132526  959882 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.132740  959882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:03.132800  959882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:03.132815  959882 certs.go:256] generating profile certs ...
	I0308 04:15:03.132936  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.key
	I0308 04:15:03.133030  959882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key.bb63bcf1
	I0308 04:15:03.133089  959882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key
	I0308 04:15:03.133262  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:03.133332  959882 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:03.133343  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:03.133365  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:03.133394  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:03.133417  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:03.133454  959882 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:03.134168  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:03.166877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:03.199087  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:03.234024  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:03.280877  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0308 04:15:03.328983  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 04:15:03.361009  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:03.396643  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:03.429939  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:03.460472  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:03.491333  959882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:03.522003  959882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:03.544828  959882 ssh_runner.go:195] Run: openssl version
	I0308 04:15:03.553845  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:03.569929  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576488  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.576551  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:03.585133  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:03.601480  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:03.617740  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623126  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.623175  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:03.631748  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:03.644269  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:03.657823  959882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663227  959882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.663298  959882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:03.669857  959882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:03.682480  959882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:03.687954  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:03.694750  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:03.701341  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:03.708001  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:03.714794  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:03.721268  959882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 04:15:03.727928  959882 kubeadm.go:391] StartCluster: {Name:old-k8s-version-496808 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-496808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:03.728034  959882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:03.728074  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.770290  959882 cri.go:89] found id: ""
	I0308 04:15:03.770378  959882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:03.782151  959882 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:03.782177  959882 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:03.782182  959882 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:03.782257  959882 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:03.792967  959882 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:03.793989  959882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-496808" does not appear in /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:15:03.794754  959882 kubeconfig.go:62] /home/jenkins/minikube-integration/18333-911675/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-496808" cluster setting kubeconfig missing "old-k8s-version-496808" context setting]
	I0308 04:15:03.796210  959882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:03.798516  959882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:03.808660  959882 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.3
	I0308 04:15:03.808693  959882 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:03.808708  959882 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:03.808762  959882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:03.848616  959882 cri.go:89] found id: ""
	I0308 04:15:03.848701  959882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:03.868260  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:03.883429  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:03.883461  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:03.883518  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:03.895185  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:03.895273  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:03.908307  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:03.919659  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:03.919745  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:03.932051  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.942658  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:03.942723  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:03.953752  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:03.963800  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:03.963862  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:03.974154  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:03.984543  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:04.118984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:02.947619  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:02.948150  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:02.948179  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:02.948080  960882 retry.go:31] will retry after 2.0669035s: waiting for machine to come up
	I0308 04:15:05.016921  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:05.017486  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:05.017520  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:05.017417  960882 retry.go:31] will retry after 1.864987253s: waiting for machine to come up
	I0308 04:15:06.883885  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:06.884364  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:06.884401  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:06.884284  960882 retry.go:31] will retry after 2.982761957s: waiting for machine to come up
	I0308 04:15:03.471304  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.843953  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:05.074410  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:07.573407  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:04.989748  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.264308  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.415419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:05.520516  959882 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:05.520630  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.021020  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:06.521340  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:07.520743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.020918  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:08.521410  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.021039  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:09.870473  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:09.870960  959302 main.go:141] libmachine: (no-preload-477676) DBG | unable to find current IP address of domain no-preload-477676 in network mk-no-preload-477676
	I0308 04:15:09.870987  959302 main.go:141] libmachine: (no-preload-477676) DBG | I0308 04:15:09.870912  960882 retry.go:31] will retry after 4.452291735s: waiting for machine to come up
	I0308 04:15:08.343021  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.344057  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.842593  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:10.073061  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:12.074322  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:09.521388  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.020955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:10.521261  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.021398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:11.521444  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.021054  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:12.520787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.021318  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:13.520679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.020879  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:14.327797  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328248  959302 main.go:141] libmachine: (no-preload-477676) Found IP for machine: 192.168.72.214
	I0308 04:15:14.328275  959302 main.go:141] libmachine: (no-preload-477676) Reserving static IP address...
	I0308 04:15:14.328290  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has current primary IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.328773  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.328820  959302 main.go:141] libmachine: (no-preload-477676) DBG | skip adding static IP to network mk-no-preload-477676 - found existing host DHCP lease matching {name: "no-preload-477676", mac: "52:54:00:3e:6f:03", ip: "192.168.72.214"}
	I0308 04:15:14.328833  959302 main.go:141] libmachine: (no-preload-477676) Reserved static IP address: 192.168.72.214
	I0308 04:15:14.328848  959302 main.go:141] libmachine: (no-preload-477676) Waiting for SSH to be available...
	I0308 04:15:14.328863  959302 main.go:141] libmachine: (no-preload-477676) DBG | Getting to WaitForSSH function...
	I0308 04:15:14.331107  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331485  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.331515  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.331621  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH client type: external
	I0308 04:15:14.331646  959302 main.go:141] libmachine: (no-preload-477676) DBG | Using SSH private key: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa (-rw-------)
	I0308 04:15:14.331689  959302 main.go:141] libmachine: (no-preload-477676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0308 04:15:14.331713  959302 main.go:141] libmachine: (no-preload-477676) DBG | About to run SSH command:
	I0308 04:15:14.331725  959302 main.go:141] libmachine: (no-preload-477676) DBG | exit 0
	I0308 04:15:14.453418  959302 main.go:141] libmachine: (no-preload-477676) DBG | SSH cmd err, output: <nil>: 
	I0308 04:15:14.453775  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetConfigRaw
	I0308 04:15:14.454486  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.457198  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457600  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.457632  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.457885  959302 profile.go:142] Saving config to /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/config.json ...
	I0308 04:15:14.458055  959302 machine.go:94] provisionDockerMachine start ...
	I0308 04:15:14.458072  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:14.458324  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.460692  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461022  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.461048  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.461193  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.461377  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461543  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.461665  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.461819  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.461989  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.462001  959302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 04:15:14.570299  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 04:15:14.570330  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570615  959302 buildroot.go:166] provisioning hostname "no-preload-477676"
	I0308 04:15:14.570641  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.570804  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.573631  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574079  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.574117  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.574318  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.574501  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574633  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.574833  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.575030  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.575265  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.575290  959302 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-477676 && echo "no-preload-477676" | sudo tee /etc/hostname
	I0308 04:15:14.695601  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-477676
	
	I0308 04:15:14.695657  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.698532  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.698857  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.698896  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.699040  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.699231  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699379  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.699533  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.699747  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:14.699916  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:14.699932  959302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-477676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-477676/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-477676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 04:15:14.810780  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 04:15:14.810812  959302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18333-911675/.minikube CaCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18333-911675/.minikube}
	I0308 04:15:14.810836  959302 buildroot.go:174] setting up certificates
	I0308 04:15:14.810848  959302 provision.go:84] configureAuth start
	I0308 04:15:14.810862  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetMachineName
	I0308 04:15:14.811199  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:14.813825  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814306  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.814338  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.814475  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.816617  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.816974  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.816994  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.817106  959302 provision.go:143] copyHostCerts
	I0308 04:15:14.817174  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem, removing ...
	I0308 04:15:14.817187  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem
	I0308 04:15:14.817239  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/ca.pem (1082 bytes)
	I0308 04:15:14.817374  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem, removing ...
	I0308 04:15:14.817388  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem
	I0308 04:15:14.817410  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/cert.pem (1123 bytes)
	I0308 04:15:14.817471  959302 exec_runner.go:144] found /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem, removing ...
	I0308 04:15:14.817477  959302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem
	I0308 04:15:14.817495  959302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18333-911675/.minikube/key.pem (1679 bytes)
	I0308 04:15:14.817542  959302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem org=jenkins.no-preload-477676 san=[127.0.0.1 192.168.72.214 localhost minikube no-preload-477676]
	I0308 04:15:14.906936  959302 provision.go:177] copyRemoteCerts
	I0308 04:15:14.906998  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 04:15:14.907021  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:14.909657  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910006  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:14.910075  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:14.910187  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:14.910387  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:14.910548  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:14.910716  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:14.992469  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0308 04:15:15.021915  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0308 04:15:15.050903  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 04:15:15.079323  959302 provision.go:87] duration metric: took 268.462015ms to configureAuth
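The configureAuth step above generates a server certificate whose SANs were listed a few lines earlier (127.0.0.1, 192.168.72.214, localhost, minikube, no-preload-477676) and copies it onto the guest as /etc/docker/server.pem. A quick way to confirm those SANs on the host-side copy of the cert (an illustrative check, not part of the minikube run logged here) is:

    # dump the generated server cert and show its SAN extension
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18333-911675/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'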
	I0308 04:15:15.079349  959302 buildroot.go:189] setting minikube options for container-runtime
	I0308 04:15:15.079515  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:15:15.079597  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.082357  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082736  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.082764  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.082943  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.083159  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083380  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.083544  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.083684  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.083861  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.083876  959302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0308 04:15:15.373423  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0308 04:15:15.373512  959302 machine.go:97] duration metric: took 915.441818ms to provisionDockerMachine
	I0308 04:15:15.373539  959302 start.go:293] postStartSetup for "no-preload-477676" (driver="kvm2")
	I0308 04:15:15.373564  959302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 04:15:15.373589  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.373983  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 04:15:15.374016  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.376726  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377105  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.377136  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.377355  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.377561  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.377765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.377937  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.460690  959302 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 04:15:15.465896  959302 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 04:15:15.465920  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/addons for local assets ...
	I0308 04:15:15.466007  959302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18333-911675/.minikube/files for local assets ...
	I0308 04:15:15.466121  959302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem -> 9189882.pem in /etc/ssl/certs
	I0308 04:15:15.466238  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 04:15:15.476917  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:15.503704  959302 start.go:296] duration metric: took 130.146106ms for postStartSetup
	I0308 04:15:15.503743  959302 fix.go:56] duration metric: took 20.576770563s for fixHost
	I0308 04:15:15.503765  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.506596  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.506937  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.506974  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.507161  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.507384  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507556  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.507708  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.507905  959302 main.go:141] libmachine: Using SSH client type: native
	I0308 04:15:15.508114  959302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.214 22 <nil> <nil>}
	I0308 04:15:15.508128  959302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 04:15:15.610454  959302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709871315.587103178
	
	I0308 04:15:15.610480  959302 fix.go:216] guest clock: 1709871315.587103178
	I0308 04:15:15.610491  959302 fix.go:229] Guest: 2024-03-08 04:15:15.587103178 +0000 UTC Remote: 2024-03-08 04:15:15.503747265 +0000 UTC m=+363.413677430 (delta=83.355913ms)
	I0308 04:15:15.610544  959302 fix.go:200] guest clock delta is within tolerance: 83.355913ms
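For reference, the delta fix.go reports is simply the guest clock minus the host-side timestamp read just before it: 04:15:15.587103178 - 04:15:15.503747265 = 0.083355913s = 83.355913ms, which is why it is logged as within tolerance and no clock adjustment follows.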
	I0308 04:15:15.610553  959302 start.go:83] releasing machines lock for "no-preload-477676", held for 20.683624892s
	I0308 04:15:15.610582  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.610877  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:15.613605  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.613993  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.614019  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.614158  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614637  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614778  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:15:15.614926  959302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 04:15:15.614996  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.615007  959302 ssh_runner.go:195] Run: cat /version.json
	I0308 04:15:15.615034  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:15:15.617886  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618108  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618294  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618326  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618484  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618611  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:15.618644  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:15.618648  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.618815  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.618898  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:15:15.618969  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.619060  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:15:15.619197  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:15:15.619369  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:15:15.718256  959302 ssh_runner.go:195] Run: systemctl --version
	I0308 04:15:15.724701  959302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0308 04:15:15.881101  959302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 04:15:15.888808  959302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 04:15:15.888878  959302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 04:15:15.906424  959302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 04:15:15.906446  959302 start.go:494] detecting cgroup driver to use...
	I0308 04:15:15.906521  959302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 04:15:15.922844  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 04:15:15.937540  959302 docker.go:217] disabling cri-docker service (if available) ...
	I0308 04:15:15.937603  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0308 04:15:15.953400  959302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0308 04:15:15.969141  959302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0308 04:15:16.092655  959302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0308 04:15:16.282954  959302 docker.go:233] disabling docker service ...
	I0308 04:15:16.283024  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0308 04:15:16.300403  959302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0308 04:15:16.314146  959302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0308 04:15:16.462031  959302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0308 04:15:16.593289  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0308 04:15:16.608616  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 04:15:16.631960  959302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0308 04:15:16.632030  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.643095  959302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0308 04:15:16.643166  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.654958  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0308 04:15:16.666663  959302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
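The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, the cgroup manager is switched to cgroupfs, and a conmon_cgroup entry is re-added beneath it. A one-line check of the result on the guest (illustrative, assuming the stock config only needed these substitutions) is:

    # show the pause image, cgroup manager and conmon cgroup now configured for CRI-O
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"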
	I0308 04:15:16.678059  959302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 04:15:16.689809  959302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 04:15:16.699444  959302 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0308 04:15:16.699490  959302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0308 04:15:16.713397  959302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 04:15:16.723138  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:16.858473  959302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0308 04:15:17.019334  959302 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0308 04:15:17.019406  959302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0308 04:15:17.025473  959302 start.go:562] Will wait 60s for crictl version
	I0308 04:15:17.025545  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.030204  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 04:15:17.073385  959302 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0308 04:15:17.073478  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.113397  959302 ssh_runner.go:195] Run: crio --version
	I0308 04:15:17.146967  959302 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0308 04:15:14.844333  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.844508  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.573567  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:16.573621  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:14.520895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.020983  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:15.521372  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.021342  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:16.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.021103  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.521455  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.020923  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:18.521552  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:19.021411  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:17.148545  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetIP
	I0308 04:15:17.151594  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.151953  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:15:17.151985  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:15:17.152208  959302 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0308 04:15:17.157417  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:17.172940  959302 kubeadm.go:877] updating cluster {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 04:15:17.173084  959302 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0308 04:15:17.173139  959302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0308 04:15:17.214336  959302 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0308 04:15:17.214362  959302 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.214472  959302 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0308 04:15:17.214482  959302 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.214497  959302 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.214444  959302 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.214579  959302 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.214445  959302 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.214464  959302 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.215905  959302 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.216029  959302 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.216055  959302 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.216075  959302 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0308 04:15:17.216085  959302 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.216115  959302 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.216158  959302 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.216220  959302 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.359317  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.360207  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.360520  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.362706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0308 04:15:17.371819  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.373706  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.409909  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.489525  959302 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.522661  959302 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0308 04:15:17.522705  959302 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.522764  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552818  959302 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0308 04:15:17.552880  959302 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.552825  959302 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0308 04:15:17.552930  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.552950  959302 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.553007  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631165  959302 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0308 04:15:17.631223  959302 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.631248  959302 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0308 04:15:17.631269  959302 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0308 04:15:17.631285  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.631293  959302 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631350  959302 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0308 04:15:17.631334  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631388  959302 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.631398  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0308 04:15:17.631421  959302 ssh_runner.go:195] Run: which crictl
	I0308 04:15:17.631441  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0308 04:15:17.631467  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0308 04:15:17.646585  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0308 04:15:17.738655  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0308 04:15:17.738735  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0308 04:15:17.738755  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.738787  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:17.738839  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.742558  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742630  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:17.742641  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0308 04:15:17.742681  959302 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:15:17.742727  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.742810  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:17.823089  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823121  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823126  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0308 04:15:17.823159  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0308 04:15:17.823178  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0308 04:15:17.823220  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823260  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:17.823284  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0308 04:15:17.823313  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:17.823335  959302 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0308 04:15:17.823404  959302 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:17.823407  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797490  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.974049847s)
	I0308 04:15:19.797540  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0308 04:15:19.797656  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.974455198s)
	I0308 04:15:19.797692  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0308 04:15:19.797707  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (1.974428531s)
	I0308 04:15:19.797719  959302 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.797722  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0308 04:15:19.797746  959302 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (1.974415299s)
	I0308 04:15:19.797777  959302 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0308 04:15:19.797787  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0308 04:15:19.346412  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.842838  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.073682  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:21.574176  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:19.521333  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.020734  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:20.521223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.020864  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:21.521628  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.021104  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:22.520694  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.021760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.521617  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:24.021683  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:23.775954  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.978139318s)
	I0308 04:15:23.775982  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0308 04:15:23.776013  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:23.776058  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0308 04:15:26.238719  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462629438s)
	I0308 04:15:26.238763  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0308 04:15:26.238804  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:26.238873  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0308 04:15:23.843947  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.343028  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.076974  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:26.573300  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:24.520845  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.021100  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:25.521486  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.021664  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:26.521391  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.021559  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:27.521029  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.021676  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.521123  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:29.021235  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:28.403851  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.164936468s)
	I0308 04:15:28.403888  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0308 04:15:28.403919  959302 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:28.403985  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0308 04:15:29.171135  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0308 04:15:29.171184  959302 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:29.171245  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0308 04:15:31.259413  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.0881301s)
	I0308 04:15:31.259465  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0308 04:15:31.259493  959302 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:31.259554  959302 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0308 04:15:28.344422  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:30.841335  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:32.842497  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.075031  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:31.572262  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:29.521163  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.020811  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:30.521619  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.021533  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:31.521102  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.021115  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:32.521400  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.021556  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:34.021218  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:33.936988  959302 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.677402747s)
	I0308 04:15:33.937025  959302 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18333-911675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0308 04:15:33.937058  959302 cache_images.go:123] Successfully loaded all cached images
	I0308 04:15:33.937065  959302 cache_images.go:92] duration metric: took 16.722690124s to LoadCachedImages
	I0308 04:15:33.937081  959302 kubeadm.go:928] updating node { 192.168.72.214 8443 v1.29.0-rc.2 crio true true} ...
	I0308 04:15:33.937211  959302 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-477676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 04:15:33.937310  959302 ssh_runner.go:195] Run: crio config
	I0308 04:15:33.996159  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:33.996184  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:33.996196  959302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 04:15:33.996219  959302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.214 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-477676 NodeName:no-preload-477676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 04:15:33.996372  959302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-477676"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 04:15:33.996434  959302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0308 04:15:34.009629  959302 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 04:15:34.009716  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 04:15:34.021033  959302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0308 04:15:34.041857  959302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0308 04:15:34.060782  959302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
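The kubeadm config rendered above is what lands in /var/tmp/minikube/kubeadm.yaml.new here (2166 bytes). Outside of minikube's own flow, the equivalent manual step would be something like the following (illustrative sketch only, not taken from this log):

    # initialize the control plane from the generated config, using the kubeadm binary minikube installed
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new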
	I0308 04:15:34.080120  959302 ssh_runner.go:195] Run: grep 192.168.72.214	control-plane.minikube.internal$ /etc/hosts
	I0308 04:15:34.084532  959302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 04:15:34.098599  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:15:34.235577  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
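At this point the kubelet drop-in written above (10-kubeadm.conf, 322 bytes) is in place and the service has been started. Two quick manual checks on the guest (not part of this log) would be:

    # show the kubelet unit together with the minikube drop-in
    systemctl cat kubelet
    # confirm the service actually came up
    systemctl is-active kubelet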
	I0308 04:15:34.255304  959302 certs.go:68] Setting up /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676 for IP: 192.168.72.214
	I0308 04:15:34.255329  959302 certs.go:194] generating shared ca certs ...
	I0308 04:15:34.255346  959302 certs.go:226] acquiring lock for ca certs: {Name:mkfae87099c574fdada8a9cfe1c1bc4501d8767b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:15:34.255551  959302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key
	I0308 04:15:34.255607  959302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key
	I0308 04:15:34.255622  959302 certs.go:256] generating profile certs ...
	I0308 04:15:34.255735  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.key
	I0308 04:15:34.255819  959302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key.8bd4914f
	I0308 04:15:34.255875  959302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key
	I0308 04:15:34.256039  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem (1338 bytes)
	W0308 04:15:34.256080  959302 certs.go:480] ignoring /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988_empty.pem, impossibly tiny 0 bytes
	I0308 04:15:34.256090  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca-key.pem (1679 bytes)
	I0308 04:15:34.256125  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/ca.pem (1082 bytes)
	I0308 04:15:34.256156  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/cert.pem (1123 bytes)
	I0308 04:15:34.256190  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/certs/key.pem (1679 bytes)
	I0308 04:15:34.256245  959302 certs.go:484] found cert: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem (1708 bytes)
	I0308 04:15:34.257031  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 04:15:34.285001  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0308 04:15:34.333466  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 04:15:34.374113  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0308 04:15:34.419280  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 04:15:34.456977  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 04:15:34.498846  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 04:15:34.525404  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 04:15:34.553453  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/certs/918988.pem --> /usr/share/ca-certificates/918988.pem (1338 bytes)
	I0308 04:15:34.581366  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/ssl/certs/9189882.pem --> /usr/share/ca-certificates/9189882.pem (1708 bytes)
	I0308 04:15:34.608858  959302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 04:15:34.633936  959302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 04:15:34.652523  959302 ssh_runner.go:195] Run: openssl version
	I0308 04:15:34.658923  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9189882.pem && ln -fs /usr/share/ca-certificates/9189882.pem /etc/ssl/certs/9189882.pem"
	I0308 04:15:34.670388  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675889  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  8 03:05 /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.675940  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9189882.pem
	I0308 04:15:34.682421  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9189882.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 04:15:34.693522  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 04:15:34.704515  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709398  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  8 02:56 /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.709447  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 04:15:34.715474  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 04:15:34.727451  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/918988.pem && ln -fs /usr/share/ca-certificates/918988.pem /etc/ssl/certs/918988.pem"
	I0308 04:15:34.739229  959302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744785  959302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  8 03:05 /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.744842  959302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/918988.pem
	I0308 04:15:34.751149  959302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/918988.pem /etc/ssl/certs/51391683.0"
	I0308 04:15:34.762570  959302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 04:15:34.767723  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 04:15:34.774194  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 04:15:34.780278  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 04:15:34.786593  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 04:15:34.792539  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 04:15:34.798963  959302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
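Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 86400 seconds (24 hours). A rough Go equivalent of that single check is sketched below using the standard library; the certificate path is taken from the log and the helper name is just for illustration.

// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether the certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}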
	I0308 04:15:34.805364  959302 kubeadm.go:391] StartCluster: {Name:no-preload-477676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-477676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 04:15:34.805481  959302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0308 04:15:34.805570  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.849977  959302 cri.go:89] found id: ""
	I0308 04:15:34.850077  959302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 04:15:34.861241  959302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 04:15:34.861258  959302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 04:15:34.861263  959302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 04:15:34.861334  959302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 04:15:34.871952  959302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 04:15:34.873167  959302 kubeconfig.go:125] found "no-preload-477676" server: "https://192.168.72.214:8443"
	I0308 04:15:34.875655  959302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 04:15:34.885214  959302 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.214
	I0308 04:15:34.885242  959302 kubeadm.go:1153] stopping kube-system containers ...
	I0308 04:15:34.885255  959302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0308 04:15:34.885314  959302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0308 04:15:34.930201  959302 cri.go:89] found id: ""
	I0308 04:15:34.930326  959302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 04:15:34.949591  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:15:34.960258  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:15:34.960286  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:15:34.960342  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:15:34.972977  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:15:34.973043  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:15:34.983451  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:15:34.993165  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:15:34.993240  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:15:35.004246  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.014250  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:15:35.014324  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:15:35.025852  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:15:35.039040  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:15:35.039097  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:15:35.049250  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:15:35.060032  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:35.194250  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.562641  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.368344142s)
	I0308 04:15:36.562682  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.790359  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.882406  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:36.996837  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:15:36.996932  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.342226  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:37.342421  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:33.585549  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:36.073057  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:38.073735  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:34.521153  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.021674  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:35.521167  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.021527  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:36.521735  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.021724  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.521610  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.020679  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.521077  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:39.020793  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.497785  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:37.997698  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:38.108966  959302 api_server.go:72] duration metric: took 1.112127399s to wait for apiserver process to appear ...
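The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a simple poll for the apiserver process to appear. A minimal local sketch of that pattern is below; the pgrep arguments come from the log, while the timeout, interval, and the fact that it runs locally rather than over SSH with sudo are assumptions for the example.

// Minimal sketch of polling for the kube-apiserver process with pgrep.
// pgrep exits 0 when at least one process matches the full command line.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}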
	I0308 04:15:38.109001  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:15:38.109026  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.834090  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.834134  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:40.834155  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:40.871188  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 04:15:40.871218  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 04:15:41.109620  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.117933  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.117963  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:41.609484  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:41.614544  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 04:15:41.614597  959302 api_server.go:103] status: https://192.168.72.214:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 04:15:42.109111  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:15:42.115430  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:15:42.123631  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:15:42.123658  959302 api_server.go:131] duration metric: took 4.014647782s to wait for apiserver health ...
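The healthz wait above keeps hitting https://192.168.72.214:8443/healthz, treating the 403 (anonymous user forbidden) and 500 (bootstrap post-start hooks still failing) responses as "not yet healthy" until the endpoint returns 200 with body "ok". The sketch below shows that polling shape; it is an anonymous, TLS-verification-skipping probe for illustration only, whereas minikube's real check authenticates with the cluster's client certificates.

// Sketch of polling an apiserver /healthz endpoint until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.214:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}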
	I0308 04:15:42.123669  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:15:42.123678  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:15:42.125139  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:15:42.126405  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:15:39.844696  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.343356  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:40.573896  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:42.577779  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:39.521370  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.020791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:40.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.020899  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:41.521416  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.021787  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.520835  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.021353  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:43.521314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:44.021373  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:42.145424  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
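The 457-byte conflist copied to /etc/cni/net.d above is not printed in this log, so its exact contents are unknown here. As a hypothetical illustration only, the sketch below emits a minimal bridge-style CNI conflist of the general kind that directory holds; every field value (name, bridge, subnet, plugin list) is an assumption, not what minikube actually wrote, apart from the 10.244.0.0/16 pod subnet which matches the kubeadm config above.

// Hypothetical minimal bridge CNI conflist, emitted as JSON so the
// structure is explicit. Values are illustrative, not minikube's file.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches podSubnet in the kubeadm config above
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}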
	I0308 04:15:42.167256  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:15:42.176365  959302 system_pods.go:59] 8 kube-system pods found
	I0308 04:15:42.176401  959302 system_pods.go:61] "coredns-76f75df574-g4vhz" [e268377d-e708-4079-a3a6-da6602451acd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:15:42.176411  959302 system_pods.go:61] "etcd-no-preload-477676" [64bd2174-4a2d-4d22-a29f-01c0fdf72479] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 04:15:42.176420  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [5fadbfc6-8111-4ea8-a4c1-74b21c8791e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 04:15:42.176428  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ffdd9475-79f4-4dd0-b8fb-5a5725637df9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 04:15:42.176441  959302 system_pods.go:61] "kube-proxy-v42lx" [e9377c3f-8faf-42f5-9c89-7ef5cb5cd0c7] Running
	I0308 04:15:42.176452  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [aab5776a-147c-4382-a1b1-d1b89a1507fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 04:15:42.176464  959302 system_pods.go:61] "metrics-server-57f55c9bc5-6nb8p" [8d60a006-ee39-44e5-8484-20052c0e1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:15:42.176471  959302 system_pods.go:61] "storage-provisioner" [4ad21d02-7a1c-4581-b090-0428f2a8419e] Running
	I0308 04:15:42.176492  959302 system_pods.go:74] duration metric: took 9.206529ms to wait for pod list to return data ...
	I0308 04:15:42.176503  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:15:42.179350  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:15:42.179386  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:15:42.179402  959302 node_conditions.go:105] duration metric: took 2.889762ms to run NodePressure ...
	I0308 04:15:42.179427  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 04:15:42.466143  959302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470917  959302 kubeadm.go:733] kubelet initialised
	I0308 04:15:42.470937  959302 kubeadm.go:734] duration metric: took 4.756658ms waiting for restarted kubelet to initialise ...
	I0308 04:15:42.470945  959302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
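The pod_ready waits that follow repeatedly read each system-critical pod and check its "Ready" condition. The sketch below shows just that condition check using the corev1 types from k8s.io/api; fetching pods from the cluster (client-go) and the skip-when-node-not-Ready handling seen in the log are omitted, and the helper name is illustrative.

// Sketch of the per-pod "Ready" condition check behind the pod_ready waits.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // false, matching "Ready":"False" in the log
}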
	I0308 04:15:42.477659  959302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.484070  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484098  959302 pod_ready.go:81] duration metric: took 6.415355ms for pod "coredns-76f75df574-g4vhz" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.484109  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "coredns-76f75df574-g4vhz" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.484117  959302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.490702  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490727  959302 pod_ready.go:81] duration metric: took 6.600271ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.490738  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "etcd-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.490745  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:42.498382  959302 pod_ready.go:97] node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498413  959302 pod_ready.go:81] duration metric: took 7.656661ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	E0308 04:15:42.498422  959302 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-477676" hosting pod "kube-apiserver-no-preload-477676" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-477676" has status "Ready":"False"
	I0308 04:15:42.498427  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:44.506155  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.006183  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.843916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.343562  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:45.072980  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:47.073386  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:44.521379  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.021201  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:45.521457  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.021361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:46.521013  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.020951  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:47.520779  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.020743  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:48.520821  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.020672  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:49.010147  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.505560  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.842861  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.844183  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.572190  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:51.573316  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:49.521335  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.020660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:50.520769  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.021030  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:51.521598  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.021223  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:52.521596  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.021714  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.520791  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:54.021534  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:53.508119  959302 pod_ready.go:102] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.007107  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.007143  959302 pod_ready.go:81] duration metric: took 12.508705772s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.007160  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016518  959302 pod_ready.go:92] pod "kube-proxy-v42lx" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:55.016541  959302 pod_ready.go:81] duration metric: took 9.36637ms for pod "kube-proxy-v42lx" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:55.016550  959302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022857  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:15:57.022884  959302 pod_ready.go:81] duration metric: took 2.00632655s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:57.022893  959302 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	I0308 04:15:54.342852  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:56.344006  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:53.574097  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:55.574423  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.072115  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:54.521371  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.021483  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:55.521415  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.021310  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:56.521320  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.020895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:57.521480  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.020975  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:58.520824  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.021614  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:15:59.032804  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.032992  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:58.845650  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:01.342691  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:00.072688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:02.072846  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:15:59.520873  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.021575  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:00.520830  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.021080  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:01.521407  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.020766  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:02.521574  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.020954  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.521306  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:04.021677  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:03.531689  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:06.029510  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:03.342901  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:05.343954  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.851550  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.573106  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:07.071375  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:04.521706  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.021169  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:05.520878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:05.520964  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:05.568132  959882 cri.go:89] found id: ""
	I0308 04:16:05.568159  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.568171  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:05.568180  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:05.568266  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:05.612975  959882 cri.go:89] found id: ""
	I0308 04:16:05.613005  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.613014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:05.613020  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:05.613082  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:05.658018  959882 cri.go:89] found id: ""
	I0308 04:16:05.658053  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.658065  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:05.658073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:05.658141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:05.705190  959882 cri.go:89] found id: ""
	I0308 04:16:05.705219  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.705230  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:05.705238  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:05.705325  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:05.746869  959882 cri.go:89] found id: ""
	I0308 04:16:05.746900  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.746911  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:05.746920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:05.746976  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:05.790808  959882 cri.go:89] found id: ""
	I0308 04:16:05.790838  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.790849  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:05.790858  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:05.790920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:05.841141  959882 cri.go:89] found id: ""
	I0308 04:16:05.841170  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.841179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:05.841187  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:05.841256  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:05.883811  959882 cri.go:89] found id: ""
	I0308 04:16:05.883874  959882 logs.go:276] 0 containers: []
	W0308 04:16:05.883885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
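Each "listing CRI containers in root" filter above (for example {State:all Name:kube-apiserver Namespaces:[]}) is followed by the crictl invocation it produces. The sketch below shows one way such a filter could be turned into crictl arguments, based only on the command lines visible in this log; it is an illustration, not minikube's actual cri.go logic, and the struct and function names are made up for the example.

// Sketch: mapping a container filter onto crictl flags, as seen in the
// "cri.go:54" / "ssh_runner" pairs above.
package main

import "fmt"

type criFilter struct {
	State      string   // e.g. "all"
	Name       string   // container name filter
	Namespaces []string // pod namespaces
}

func crictlArgs(f criFilter) []string {
	args := []string{"crictl", "ps"}
	if f.State == "all" {
		args = append(args, "-a")
	}
	args = append(args, "--quiet")
	if f.Name != "" {
		args = append(args, "--name="+f.Name)
	}
	for _, ns := range f.Namespaces {
		args = append(args, "--label", "io.kubernetes.pod.namespace="+ns)
	}
	return args
}

func main() {
	fmt.Println(crictlArgs(criFilter{State: "all", Name: "kube-apiserver"}))
	fmt.Println(crictlArgs(criFilter{State: "all", Namespaces: []string{"kube-system"}}))
}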
	I0308 04:16:05.883900  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:05.883916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:05.941801  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:05.941834  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:05.956062  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:05.956088  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:06.085575  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:06.085619  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:06.085634  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:06.155477  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:06.155512  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.704955  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:08.720108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:08.720176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:08.759487  959882 cri.go:89] found id: ""
	I0308 04:16:08.759514  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.759522  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:08.759529  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:08.759579  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:08.800149  959882 cri.go:89] found id: ""
	I0308 04:16:08.800177  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.800188  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:08.800216  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:08.800290  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:08.837825  959882 cri.go:89] found id: ""
	I0308 04:16:08.837856  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.837867  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:08.837874  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:08.837938  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:08.881296  959882 cri.go:89] found id: ""
	I0308 04:16:08.881326  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.881338  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:08.881345  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:08.881432  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:08.920238  959882 cri.go:89] found id: ""
	I0308 04:16:08.920267  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.920279  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:08.920287  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:08.920338  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:08.960380  959882 cri.go:89] found id: ""
	I0308 04:16:08.960408  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.960417  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:08.960423  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:08.960506  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:08.999049  959882 cri.go:89] found id: ""
	I0308 04:16:08.999074  959882 logs.go:276] 0 containers: []
	W0308 04:16:08.999082  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:08.999087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:08.999139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:09.075782  959882 cri.go:89] found id: ""
	I0308 04:16:09.075809  959882 logs.go:276] 0 containers: []
	W0308 04:16:09.075820  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:09.075831  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:09.075868  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:09.146238  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:09.146278  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:08.031651  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.529752  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:10.343135  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:12.345054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.073688  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:11.574266  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:09.191255  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:09.191289  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:09.243958  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:09.243996  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:09.260980  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:09.261011  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:09.341479  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:11.842466  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:11.856326  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:11.856393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:11.897853  959882 cri.go:89] found id: ""
	I0308 04:16:11.897885  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.897897  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:11.897904  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:11.897978  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:11.937344  959882 cri.go:89] found id: ""
	I0308 04:16:11.937369  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.937378  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:11.937384  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:11.937440  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:11.978201  959882 cri.go:89] found id: ""
	I0308 04:16:11.978226  959882 logs.go:276] 0 containers: []
	W0308 04:16:11.978236  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:11.978244  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:11.978301  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:12.018823  959882 cri.go:89] found id: ""
	I0308 04:16:12.018850  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.018860  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:12.018866  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:12.018920  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:12.058477  959882 cri.go:89] found id: ""
	I0308 04:16:12.058511  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.058523  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:12.058531  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:12.058602  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:12.098867  959882 cri.go:89] found id: ""
	I0308 04:16:12.098897  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.098908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:12.098916  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:12.098981  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:12.137615  959882 cri.go:89] found id: ""
	I0308 04:16:12.137647  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.137658  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:12.137667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:12.137737  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:12.174098  959882 cri.go:89] found id: ""
	I0308 04:16:12.174127  959882 logs.go:276] 0 containers: []
	W0308 04:16:12.174139  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:12.174152  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:12.174169  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:12.261481  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:12.261509  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:12.261527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:12.357271  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:12.357313  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:12.409879  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:12.409916  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:12.461594  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:12.461635  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:13.033236  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:15.530721  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.842647  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:17.341950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.072869  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:16.073201  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:18.073655  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:14.979772  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:14.993986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:14.994056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:15.049380  959882 cri.go:89] found id: ""
	I0308 04:16:15.049402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.049410  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:15.049416  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:15.049472  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:15.087605  959882 cri.go:89] found id: ""
	I0308 04:16:15.087628  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.087636  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:15.087643  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:15.087716  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:15.126378  959882 cri.go:89] found id: ""
	I0308 04:16:15.126402  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.126411  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:15.126419  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:15.126484  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:15.161737  959882 cri.go:89] found id: ""
	I0308 04:16:15.161776  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.161784  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:15.161790  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:15.161841  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:15.198650  959882 cri.go:89] found id: ""
	I0308 04:16:15.198684  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.198696  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:15.198704  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:15.198787  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:15.237177  959882 cri.go:89] found id: ""
	I0308 04:16:15.237207  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.237216  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:15.237222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:15.237289  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:15.275736  959882 cri.go:89] found id: ""
	I0308 04:16:15.275761  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.275772  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:15.275780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:15.275848  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:15.319610  959882 cri.go:89] found id: ""
	I0308 04:16:15.319642  959882 logs.go:276] 0 containers: []
	W0308 04:16:15.319654  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:15.319667  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:15.319686  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:15.401999  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:15.402027  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:15.402044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:15.489207  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:15.489253  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:15.540182  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:15.540216  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:15.592496  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:15.592533  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.108248  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:18.122714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:18.122795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:18.159829  959882 cri.go:89] found id: ""
	I0308 04:16:18.159855  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.159862  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:18.159868  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:18.159923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:18.197862  959882 cri.go:89] found id: ""
	I0308 04:16:18.197898  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.197910  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:18.197919  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:18.197980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:18.234709  959882 cri.go:89] found id: ""
	I0308 04:16:18.234739  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.234751  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:18.234759  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:18.234825  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:18.271856  959882 cri.go:89] found id: ""
	I0308 04:16:18.271881  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.271890  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:18.271897  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:18.271962  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:18.316805  959882 cri.go:89] found id: ""
	I0308 04:16:18.316862  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.316876  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:18.316884  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:18.316954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:18.352936  959882 cri.go:89] found id: ""
	I0308 04:16:18.352967  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.352978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:18.352987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:18.353053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:18.392207  959882 cri.go:89] found id: ""
	I0308 04:16:18.392235  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.392244  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:18.392253  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:18.392321  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:18.430890  959882 cri.go:89] found id: ""
	I0308 04:16:18.430919  959882 logs.go:276] 0 containers: []
	W0308 04:16:18.430930  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:18.430944  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:18.430959  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:18.516371  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:18.516399  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:18.516419  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:18.603462  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:18.603498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:18.648246  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:18.648286  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:18.707255  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:18.707292  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:18.029307  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.029909  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:19.344795  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.842652  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:20.573003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:23.075493  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:21.225019  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:21.239824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:21.239899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:21.281114  959882 cri.go:89] found id: ""
	I0308 04:16:21.281142  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.281152  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:21.281159  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:21.281230  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:21.321346  959882 cri.go:89] found id: ""
	I0308 04:16:21.321375  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.321384  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:21.321391  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:21.321456  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:21.365699  959882 cri.go:89] found id: ""
	I0308 04:16:21.365721  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.365729  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:21.365736  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:21.365792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:21.418990  959882 cri.go:89] found id: ""
	I0308 04:16:21.419019  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.419031  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:21.419040  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:21.419103  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:21.498706  959882 cri.go:89] found id: ""
	I0308 04:16:21.498735  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.498766  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:21.498774  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:21.498842  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:21.539861  959882 cri.go:89] found id: ""
	I0308 04:16:21.539881  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.539889  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:21.539896  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:21.539946  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:21.577350  959882 cri.go:89] found id: ""
	I0308 04:16:21.577373  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.577381  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:21.577386  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:21.577434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:21.619415  959882 cri.go:89] found id: ""
	I0308 04:16:21.619443  959882 logs.go:276] 0 containers: []
	W0308 04:16:21.619452  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:21.619462  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:21.619476  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:21.696226  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:21.696246  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:21.696260  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:21.776457  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:21.776498  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:21.821495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:21.821534  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:21.875110  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:21.875141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:22.530757  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.531453  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:27.030221  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.341748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:26.343268  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:25.575923  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.072981  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:24.392128  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:24.409152  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:24.409237  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:24.453549  959882 cri.go:89] found id: ""
	I0308 04:16:24.453574  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.453583  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:24.453588  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:24.453639  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:24.489544  959882 cri.go:89] found id: ""
	I0308 04:16:24.489573  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.489582  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:24.489589  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:24.489641  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:24.530237  959882 cri.go:89] found id: ""
	I0308 04:16:24.530291  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.530307  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:24.530316  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:24.530379  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:24.569740  959882 cri.go:89] found id: ""
	I0308 04:16:24.569770  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.569782  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:24.569792  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:24.569868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:24.615782  959882 cri.go:89] found id: ""
	I0308 04:16:24.615814  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.615824  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:24.615830  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:24.615891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:24.660466  959882 cri.go:89] found id: ""
	I0308 04:16:24.660501  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.660514  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:24.660522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:24.660592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:24.699557  959882 cri.go:89] found id: ""
	I0308 04:16:24.699584  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.699593  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:24.699599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:24.699656  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:24.739180  959882 cri.go:89] found id: ""
	I0308 04:16:24.739212  959882 logs.go:276] 0 containers: []
	W0308 04:16:24.739223  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:24.739239  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:24.739255  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:24.792962  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:24.792994  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:24.807519  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:24.807547  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:24.883176  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:24.883202  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:24.883219  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:24.965867  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:24.965907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.524895  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:27.540579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:27.540678  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:27.580704  959882 cri.go:89] found id: ""
	I0308 04:16:27.580734  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.580744  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:27.580751  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:27.580814  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:27.620492  959882 cri.go:89] found id: ""
	I0308 04:16:27.620526  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.620538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:27.620547  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:27.620623  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:27.658429  959882 cri.go:89] found id: ""
	I0308 04:16:27.658464  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.658478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:27.658488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:27.658557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:27.696661  959882 cri.go:89] found id: ""
	I0308 04:16:27.696693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.696706  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:27.696714  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:27.696783  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:27.732352  959882 cri.go:89] found id: ""
	I0308 04:16:27.732382  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.732391  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:27.732397  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:27.732462  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:27.768328  959882 cri.go:89] found id: ""
	I0308 04:16:27.768357  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.768368  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:27.768377  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:27.768443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:27.802663  959882 cri.go:89] found id: ""
	I0308 04:16:27.802693  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.802704  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:27.802712  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:27.802778  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:27.840134  959882 cri.go:89] found id: ""
	I0308 04:16:27.840161  959882 logs.go:276] 0 containers: []
	W0308 04:16:27.840177  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:27.840191  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:27.840206  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:27.924259  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:27.924296  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:27.969694  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:27.969738  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:28.025588  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:28.025620  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:28.042332  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:28.042363  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:28.124389  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:29.037433  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:31.043629  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:28.841924  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.844031  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.571436  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:32.574800  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:30.624800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:30.641942  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:30.642013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:30.685012  959882 cri.go:89] found id: ""
	I0308 04:16:30.685043  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.685053  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:30.685060  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:30.685131  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:30.722769  959882 cri.go:89] found id: ""
	I0308 04:16:30.722799  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.722807  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:30.722813  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:30.722865  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:30.760831  959882 cri.go:89] found id: ""
	I0308 04:16:30.760913  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.760929  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:30.760938  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:30.761009  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:30.799793  959882 cri.go:89] found id: ""
	I0308 04:16:30.799823  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.799836  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:30.799844  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:30.799982  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:30.838444  959882 cri.go:89] found id: ""
	I0308 04:16:30.838478  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.838488  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:30.838497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:30.838559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:30.880170  959882 cri.go:89] found id: ""
	I0308 04:16:30.880215  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.880225  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:30.880232  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:30.880293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:30.922370  959882 cri.go:89] found id: ""
	I0308 04:16:30.922397  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.922407  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:30.922412  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:30.922482  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:30.961759  959882 cri.go:89] found id: ""
	I0308 04:16:30.961793  959882 logs.go:276] 0 containers: []
	W0308 04:16:30.961810  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:30.961821  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:30.961854  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:31.015993  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:31.016029  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:31.032098  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:31.032135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:31.110402  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:31.110428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:31.110447  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:31.193942  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:31.193982  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:33.743809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:33.760087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:33.760154  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:33.799990  959882 cri.go:89] found id: ""
	I0308 04:16:33.800018  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.800028  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:33.800035  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:33.800098  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:33.839935  959882 cri.go:89] found id: ""
	I0308 04:16:33.839959  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.839968  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:33.839975  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:33.840029  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:33.879821  959882 cri.go:89] found id: ""
	I0308 04:16:33.879852  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.879863  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:33.879871  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:33.879974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:33.920087  959882 cri.go:89] found id: ""
	I0308 04:16:33.920115  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.920123  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:33.920129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:33.920186  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:33.962302  959882 cri.go:89] found id: ""
	I0308 04:16:33.962331  959882 logs.go:276] 0 containers: []
	W0308 04:16:33.962342  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:33.962351  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:33.962415  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:34.001578  959882 cri.go:89] found id: ""
	I0308 04:16:34.001613  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.001625  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:34.001634  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:34.001703  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:34.045744  959882 cri.go:89] found id: ""
	I0308 04:16:34.045765  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.045774  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:34.045779  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:34.045830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:34.087677  959882 cri.go:89] found id: ""
	I0308 04:16:34.087704  959882 logs.go:276] 0 containers: []
	W0308 04:16:34.087712  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:34.087726  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:34.087743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:34.103841  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:34.103871  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:16:33.530731  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:36.029806  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:33.342367  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.841477  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.842082  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:35.072609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:37.077159  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	W0308 04:16:34.180627  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:34.180655  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:34.180674  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:34.269958  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:34.269997  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:34.314599  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:34.314648  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:36.872398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:36.889087  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:36.889176  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:36.932825  959882 cri.go:89] found id: ""
	I0308 04:16:36.932850  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.932858  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:36.932864  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:36.932933  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:36.972442  959882 cri.go:89] found id: ""
	I0308 04:16:36.972476  959882 logs.go:276] 0 containers: []
	W0308 04:16:36.972488  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:36.972495  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:36.972557  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:37.019266  959882 cri.go:89] found id: ""
	I0308 04:16:37.019299  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.019313  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:37.019322  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:37.019404  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:37.070487  959882 cri.go:89] found id: ""
	I0308 04:16:37.070518  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.070528  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:37.070536  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:37.070603  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:37.112459  959882 cri.go:89] found id: ""
	I0308 04:16:37.112483  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.112492  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:37.112497  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:37.112563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:37.151483  959882 cri.go:89] found id: ""
	I0308 04:16:37.151514  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.151526  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:37.151534  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:37.151589  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:37.191157  959882 cri.go:89] found id: ""
	I0308 04:16:37.191186  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.191198  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:37.191206  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:37.191271  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:37.230913  959882 cri.go:89] found id: ""
	I0308 04:16:37.230941  959882 logs.go:276] 0 containers: []
	W0308 04:16:37.230952  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:37.230971  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:37.230988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:37.286815  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:37.286853  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:37.303326  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:37.303356  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:37.382696  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:37.382714  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:37.382729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:37.469052  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:37.469092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:38.031553  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.531839  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.842468  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.842843  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:39.572261  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:41.573148  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:40.014986  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:40.031757  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:40.031830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:40.076924  959882 cri.go:89] found id: ""
	I0308 04:16:40.076951  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.076962  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:40.076971  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:40.077030  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:40.117463  959882 cri.go:89] found id: ""
	I0308 04:16:40.117494  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.117506  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:40.117514  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:40.117593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:40.161639  959882 cri.go:89] found id: ""
	I0308 04:16:40.161672  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.161683  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:40.161690  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:40.161753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:40.199190  959882 cri.go:89] found id: ""
	I0308 04:16:40.199218  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.199227  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:40.199236  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:40.199320  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:40.236391  959882 cri.go:89] found id: ""
	I0308 04:16:40.236416  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.236426  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:40.236434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:40.236502  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:40.277595  959882 cri.go:89] found id: ""
	I0308 04:16:40.277625  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.277635  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:40.277645  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:40.277718  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:40.316460  959882 cri.go:89] found id: ""
	I0308 04:16:40.316488  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.316497  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:40.316503  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:40.316555  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:40.354988  959882 cri.go:89] found id: ""
	I0308 04:16:40.355020  959882 logs.go:276] 0 containers: []
	W0308 04:16:40.355031  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:40.355043  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:40.355058  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:40.445658  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:40.445685  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:40.445698  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:40.532181  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:40.532214  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:40.581561  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:40.581598  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:40.637015  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:40.637050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.153288  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:43.170090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:43.170183  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:43.210949  959882 cri.go:89] found id: ""
	I0308 04:16:43.210980  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.210993  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:43.211001  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:43.211067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:43.249865  959882 cri.go:89] found id: ""
	I0308 04:16:43.249890  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.249898  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:43.249904  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:43.249954  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:43.287967  959882 cri.go:89] found id: ""
	I0308 04:16:43.288000  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.288012  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:43.288020  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:43.288093  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:43.326511  959882 cri.go:89] found id: ""
	I0308 04:16:43.326542  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.326553  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:43.326562  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:43.326616  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:43.365531  959882 cri.go:89] found id: ""
	I0308 04:16:43.365560  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.365568  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:43.365574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:43.365642  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:43.407006  959882 cri.go:89] found id: ""
	I0308 04:16:43.407038  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.407050  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:43.407058  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:43.407146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:43.448401  959882 cri.go:89] found id: ""
	I0308 04:16:43.448430  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.448439  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:43.448445  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:43.448498  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:43.487079  959882 cri.go:89] found id: ""
	I0308 04:16:43.487122  959882 logs.go:276] 0 containers: []
	W0308 04:16:43.487140  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:43.487150  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:43.487164  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:43.542174  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:43.542209  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:43.557983  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:43.558008  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:43.641365  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:43.641392  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:43.641412  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:43.723791  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:43.723851  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:43.043473  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:45.530311  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.343254  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.343735  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:44.074119  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.573551  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:46.302382  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:46.316489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:46.316556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:46.356758  959882 cri.go:89] found id: ""
	I0308 04:16:46.356784  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.356793  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:46.356801  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:46.356857  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:46.395007  959882 cri.go:89] found id: ""
	I0308 04:16:46.395039  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.395051  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:46.395058  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:46.395126  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:46.432125  959882 cri.go:89] found id: ""
	I0308 04:16:46.432159  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.432172  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:46.432181  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:46.432250  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:46.470559  959882 cri.go:89] found id: ""
	I0308 04:16:46.470584  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.470593  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:46.470599  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:46.470655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:46.511654  959882 cri.go:89] found id: ""
	I0308 04:16:46.511681  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.511691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:46.511699  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:46.511769  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:46.553540  959882 cri.go:89] found id: ""
	I0308 04:16:46.553564  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.553572  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:46.553579  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:46.553626  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:46.590902  959882 cri.go:89] found id: ""
	I0308 04:16:46.590929  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.590940  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:46.590948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:46.591013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:46.631568  959882 cri.go:89] found id: ""
	I0308 04:16:46.631598  959882 logs.go:276] 0 containers: []
	W0308 04:16:46.631610  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:46.631623  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:46.631640  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:46.689248  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:46.689300  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:46.705110  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:46.705135  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:46.782434  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:46.782461  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:46.782479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:46.869583  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:46.869621  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:48.031386  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:50.529613  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:48.842960  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.341717  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.072154  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:51.072587  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.076274  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:49.417289  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:49.432408  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:49.432485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:49.470611  959882 cri.go:89] found id: ""
	I0308 04:16:49.470638  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.470646  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:49.470658  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:49.470745  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:49.530539  959882 cri.go:89] found id: ""
	I0308 04:16:49.530580  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.530592  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:49.530600  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:49.530673  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:49.580330  959882 cri.go:89] found id: ""
	I0308 04:16:49.580359  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.580371  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:49.580379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:49.580445  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:49.619258  959882 cri.go:89] found id: ""
	I0308 04:16:49.619283  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.619292  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:49.619298  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:49.619349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:49.659184  959882 cri.go:89] found id: ""
	I0308 04:16:49.659208  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.659216  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:49.659222  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:49.659273  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:49.697086  959882 cri.go:89] found id: ""
	I0308 04:16:49.697113  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.697124  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:49.697131  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:49.697195  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:49.739886  959882 cri.go:89] found id: ""
	I0308 04:16:49.739917  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.739926  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:49.739934  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:49.740004  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:49.778592  959882 cri.go:89] found id: ""
	I0308 04:16:49.778627  959882 logs.go:276] 0 containers: []
	W0308 04:16:49.778639  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:49.778651  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:49.778668  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:49.831995  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:49.832028  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:49.848879  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:49.848907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:49.931303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:49.931324  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:49.931337  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:50.017653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:50.017693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.569021  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:52.585672  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:52.585740  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:52.630344  959882 cri.go:89] found id: ""
	I0308 04:16:52.630380  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.630392  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:52.630401  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:52.630469  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:52.670698  959882 cri.go:89] found id: ""
	I0308 04:16:52.670729  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.670737  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:52.670768  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:52.670832  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:52.706785  959882 cri.go:89] found id: ""
	I0308 04:16:52.706813  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.706822  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:52.706828  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:52.706888  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:52.745334  959882 cri.go:89] found id: ""
	I0308 04:16:52.745359  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.745367  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:52.745379  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:52.745443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:52.782375  959882 cri.go:89] found id: ""
	I0308 04:16:52.782403  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.782415  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:52.782422  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:52.782489  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:52.820538  959882 cri.go:89] found id: ""
	I0308 04:16:52.820570  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.820594  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:52.820604  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:52.820671  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:52.860055  959882 cri.go:89] found id: ""
	I0308 04:16:52.860086  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.860096  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:52.860104  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:52.860161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:52.900595  959882 cri.go:89] found id: ""
	I0308 04:16:52.900625  959882 logs.go:276] 0 containers: []
	W0308 04:16:52.900636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:52.900646  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:52.900666  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:52.954619  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:52.954653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:52.971930  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:52.971960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:53.050576  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:53.050597  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:53.050610  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:53.129683  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:53.129713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:52.530787  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.031714  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.034683  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:53.342744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.342916  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.571857  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:57.572729  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:55.669809  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:55.685062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:55.685142  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:55.722031  959882 cri.go:89] found id: ""
	I0308 04:16:55.722058  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.722067  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:55.722076  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:55.722141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:55.764443  959882 cri.go:89] found id: ""
	I0308 04:16:55.764472  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.764483  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:55.764491  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:55.764562  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:55.804425  959882 cri.go:89] found id: ""
	I0308 04:16:55.804453  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.804462  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:55.804469  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:55.804538  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:55.844482  959882 cri.go:89] found id: ""
	I0308 04:16:55.844507  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.844516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:55.844522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:55.844592  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:55.884398  959882 cri.go:89] found id: ""
	I0308 04:16:55.884429  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.884442  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:55.884451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:55.884526  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:55.922172  959882 cri.go:89] found id: ""
	I0308 04:16:55.922199  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.922208  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:55.922214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:55.922286  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:55.960450  959882 cri.go:89] found id: ""
	I0308 04:16:55.960477  959882 logs.go:276] 0 containers: []
	W0308 04:16:55.960485  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:55.960491  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:55.960542  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:56.001181  959882 cri.go:89] found id: ""
	I0308 04:16:56.001215  959882 logs.go:276] 0 containers: []
	W0308 04:16:56.001227  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:56.001241  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:56.001263  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:56.058108  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:56.058143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:56.075096  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:56.075123  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:56.161390  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:56.161423  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:56.161444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:16:56.255014  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:56.255057  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:58.799995  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:16:58.815511  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:16:58.815580  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:16:58.856633  959882 cri.go:89] found id: ""
	I0308 04:16:58.856668  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.856679  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:16:58.856688  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:16:58.856774  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:16:58.898273  959882 cri.go:89] found id: ""
	I0308 04:16:58.898307  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.898318  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:16:58.898327  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:16:58.898394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:16:58.938816  959882 cri.go:89] found id: ""
	I0308 04:16:58.938846  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.938854  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:16:58.938860  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:16:58.938916  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:16:58.976613  959882 cri.go:89] found id: ""
	I0308 04:16:58.976646  959882 logs.go:276] 0 containers: []
	W0308 04:16:58.976658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:16:58.976667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:16:58.976753  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:16:59.023970  959882 cri.go:89] found id: ""
	I0308 04:16:59.024005  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.024018  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:16:59.024036  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:16:59.024100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:16:59.063463  959882 cri.go:89] found id: ""
	I0308 04:16:59.063494  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.063503  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:16:59.063510  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:16:59.063563  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:16:59.105476  959882 cri.go:89] found id: ""
	I0308 04:16:59.105506  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.105519  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:16:59.105527  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:16:59.105597  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:16:59.143862  959882 cri.go:89] found id: ""
	I0308 04:16:59.143899  959882 logs.go:276] 0 containers: []
	W0308 04:16:59.143912  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:16:59.143925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:16:59.143943  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:16:59.531587  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.031069  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.343970  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:01.841528  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:00.072105  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:02.072883  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:16:59.184165  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:16:59.184202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:16:59.238442  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:16:59.238479  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:16:59.254272  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:16:59.254304  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:16:59.329183  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:16:59.329208  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:16:59.329221  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:01.914204  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:01.934920  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:01.934995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:02.007459  959882 cri.go:89] found id: ""
	I0308 04:17:02.007486  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.007497  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:02.007505  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:02.007568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:02.046762  959882 cri.go:89] found id: ""
	I0308 04:17:02.046796  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.046806  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:02.046814  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:02.046879  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:02.092716  959882 cri.go:89] found id: ""
	I0308 04:17:02.092750  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.092763  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:02.092771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:02.092840  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:02.132660  959882 cri.go:89] found id: ""
	I0308 04:17:02.132688  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.132699  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:02.132707  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:02.132781  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:02.176847  959882 cri.go:89] found id: ""
	I0308 04:17:02.176872  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.176881  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:02.176891  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:02.176963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:02.217316  959882 cri.go:89] found id: ""
	I0308 04:17:02.217343  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.217352  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:02.217358  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:02.217413  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:02.255866  959882 cri.go:89] found id: ""
	I0308 04:17:02.255897  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.255908  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:02.255915  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:02.255983  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:02.295069  959882 cri.go:89] found id: ""
	I0308 04:17:02.295102  959882 logs.go:276] 0 containers: []
	W0308 04:17:02.295113  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:02.295125  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:02.295142  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:02.349451  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:02.349478  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:02.364176  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:02.364203  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:02.451142  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:02.451166  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:02.451182  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:02.543309  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:02.543344  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:04.530095  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:06.530232  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:03.842117  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.842913  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.843818  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:04.572579  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:07.073586  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:05.086760  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:05.102760  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:05.102830  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:05.144853  959882 cri.go:89] found id: ""
	I0308 04:17:05.144889  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.144900  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:05.144908  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:05.144980  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:05.193818  959882 cri.go:89] found id: ""
	I0308 04:17:05.193846  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.193854  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:05.193861  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:05.193927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:05.238991  959882 cri.go:89] found id: ""
	I0308 04:17:05.239018  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.239038  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:05.239046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:05.239113  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:05.283171  959882 cri.go:89] found id: ""
	I0308 04:17:05.283220  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.283231  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:05.283239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:05.283302  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:05.328113  959882 cri.go:89] found id: ""
	I0308 04:17:05.328143  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.328154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:05.328162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:05.328228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:05.366860  959882 cri.go:89] found id: ""
	I0308 04:17:05.366890  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.366900  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:05.366908  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:05.366974  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:05.403639  959882 cri.go:89] found id: ""
	I0308 04:17:05.403700  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.403710  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:05.403719  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:05.403785  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:05.442983  959882 cri.go:89] found id: ""
	I0308 04:17:05.443012  959882 logs.go:276] 0 containers: []
	W0308 04:17:05.443024  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:05.443037  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:05.443054  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:05.498560  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:05.498595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:05.513192  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:05.513220  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:05.593746  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:05.593767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:05.593780  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:05.672108  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:05.672146  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.221066  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:08.236062  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:08.236141  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:08.275632  959882 cri.go:89] found id: ""
	I0308 04:17:08.275673  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.275688  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:08.275699  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:08.275777  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:08.313891  959882 cri.go:89] found id: ""
	I0308 04:17:08.313937  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.313959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:08.313968  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:08.314053  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:08.354002  959882 cri.go:89] found id: ""
	I0308 04:17:08.354028  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.354036  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:08.354042  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:08.354106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:08.393571  959882 cri.go:89] found id: ""
	I0308 04:17:08.393599  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.393607  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:08.393614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:08.393685  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:08.433609  959882 cri.go:89] found id: ""
	I0308 04:17:08.433634  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.433652  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:08.433658  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:08.433727  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:08.476700  959882 cri.go:89] found id: ""
	I0308 04:17:08.476734  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.476744  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:08.476749  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:08.476827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:08.514870  959882 cri.go:89] found id: ""
	I0308 04:17:08.514903  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.514914  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:08.514921  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:08.514988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:08.553442  959882 cri.go:89] found id: ""
	I0308 04:17:08.553467  959882 logs.go:276] 0 containers: []
	W0308 04:17:08.553478  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:08.553490  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:08.553506  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:08.614328  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:08.614362  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:08.629172  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:08.629199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:08.704397  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:08.704425  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:08.704453  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:08.784782  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:08.784820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:08.531066  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.036465  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:10.342187  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:12.342932  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:09.572656  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.574027  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:11.338084  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:11.352680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:11.352758  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:11.392487  959882 cri.go:89] found id: ""
	I0308 04:17:11.392520  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.392529  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:11.392535  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:11.392586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:11.431150  959882 cri.go:89] found id: ""
	I0308 04:17:11.431181  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.431189  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:11.431196  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:11.431254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:11.469526  959882 cri.go:89] found id: ""
	I0308 04:17:11.469559  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.469570  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:11.469578  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:11.469646  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:11.515424  959882 cri.go:89] found id: ""
	I0308 04:17:11.515447  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.515455  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:11.515461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:11.515514  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:11.558962  959882 cri.go:89] found id: ""
	I0308 04:17:11.558993  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.559003  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:11.559011  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:11.559074  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:11.600104  959882 cri.go:89] found id: ""
	I0308 04:17:11.600128  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.600138  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:11.600145  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:11.600200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:11.637771  959882 cri.go:89] found id: ""
	I0308 04:17:11.637800  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.637811  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:11.637818  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:11.637900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:11.677597  959882 cri.go:89] found id: ""
	I0308 04:17:11.677628  959882 logs.go:276] 0 containers: []
	W0308 04:17:11.677636  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:11.677648  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:11.677664  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:11.719498  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:11.719527  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:11.778019  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:11.778052  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:11.794019  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:11.794048  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:11.867037  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:11.867120  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:11.867143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:13.530159  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:15.530802  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.343432  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.842378  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.072310  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:16.072750  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:14.447761  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:14.462355  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:14.462447  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:14.502718  959882 cri.go:89] found id: ""
	I0308 04:17:14.502759  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.502770  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:14.502777  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:14.502843  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:14.540505  959882 cri.go:89] found id: ""
	I0308 04:17:14.540531  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.540538  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:14.540546  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:14.540604  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:14.582272  959882 cri.go:89] found id: ""
	I0308 04:17:14.582303  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.582314  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:14.582321  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:14.582398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:14.624249  959882 cri.go:89] found id: ""
	I0308 04:17:14.624279  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.624291  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:14.624299  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:14.624367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:14.661041  959882 cri.go:89] found id: ""
	I0308 04:17:14.661070  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.661079  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:14.661084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:14.661153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:14.698847  959882 cri.go:89] found id: ""
	I0308 04:17:14.698878  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.698885  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:14.698894  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:14.698948  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:14.741118  959882 cri.go:89] found id: ""
	I0308 04:17:14.741150  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.741162  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:14.741170  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:14.741240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:14.778875  959882 cri.go:89] found id: ""
	I0308 04:17:14.778908  959882 logs.go:276] 0 containers: []
	W0308 04:17:14.778920  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:14.778932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:14.778949  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:14.830526  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:14.830558  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:14.845449  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:14.845481  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:14.924510  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:14.924540  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:14.924556  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:15.008982  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:15.009020  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
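The cycle above repeats throughout this log: minikube probes for a running kube-apiserver with pgrep, lists each expected control-plane container with crictl, finds none, and then falls back to collecting kubelet, dmesg, CRI-O, and container-status output, while the describe-nodes step fails because nothing is serving on localhost:8443. A minimal sketch of the per-component check, assuming it is run directly on the node with crictl on PATH (the component names and flags are taken from the log lines above; this is an illustration, not minikube's own code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # Same query the log records: all states, IDs only, filtered by container name.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done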
	I0308 04:17:17.555836  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:17.571594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:17.571665  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:17.616689  959882 cri.go:89] found id: ""
	I0308 04:17:17.616722  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.616734  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:17.616742  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:17.616807  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:17.659137  959882 cri.go:89] found id: ""
	I0308 04:17:17.659166  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.659178  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:17.659186  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:17.659255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:17.696520  959882 cri.go:89] found id: ""
	I0308 04:17:17.696555  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.696565  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:17.696574  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:17.696633  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:17.734406  959882 cri.go:89] found id: ""
	I0308 04:17:17.734440  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.734453  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:17.734461  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:17.734527  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:17.771905  959882 cri.go:89] found id: ""
	I0308 04:17:17.771938  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.771950  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:17.771958  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:17.772026  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:17.809100  959882 cri.go:89] found id: ""
	I0308 04:17:17.809137  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.809149  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:17.809157  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:17.809218  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:17.849365  959882 cri.go:89] found id: ""
	I0308 04:17:17.849413  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.849425  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:17.849433  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:17.849519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:17.886016  959882 cri.go:89] found id: ""
	I0308 04:17:17.886049  959882 logs.go:276] 0 containers: []
	W0308 04:17:17.886060  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:17.886072  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:17.886092  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:17.964117  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:17.964149  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:17.964166  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:18.055953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:18.055998  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:18.105081  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:18.105116  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:18.159996  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:18.160031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:18.031032  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.531869  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.842750  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.844061  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:18.572291  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:21.072983  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:20.676464  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:20.692705  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:20.692786  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:20.731660  959882 cri.go:89] found id: ""
	I0308 04:17:20.731688  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.731697  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:20.731703  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:20.731754  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:20.768124  959882 cri.go:89] found id: ""
	I0308 04:17:20.768150  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.768158  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:20.768164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:20.768285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:20.805890  959882 cri.go:89] found id: ""
	I0308 04:17:20.805914  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.805923  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:20.805932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:20.805995  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:20.848376  959882 cri.go:89] found id: ""
	I0308 04:17:20.848402  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.848412  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:20.848421  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:20.848493  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:20.888354  959882 cri.go:89] found id: ""
	I0308 04:17:20.888385  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.888397  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:20.888405  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:20.888475  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:20.934680  959882 cri.go:89] found id: ""
	I0308 04:17:20.934710  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.934724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:20.934734  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:20.934805  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:20.972505  959882 cri.go:89] found id: ""
	I0308 04:17:20.972540  959882 logs.go:276] 0 containers: []
	W0308 04:17:20.972552  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:20.972561  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:20.972629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:21.011917  959882 cri.go:89] found id: ""
	I0308 04:17:21.011947  959882 logs.go:276] 0 containers: []
	W0308 04:17:21.011958  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:21.011970  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:21.011988  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:21.071906  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:21.071938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:21.086822  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:21.086846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:21.165303  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:21.165331  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:21.165349  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:21.245847  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:21.245884  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:23.788459  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:23.804549  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:23.804629  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:23.841572  959882 cri.go:89] found id: ""
	I0308 04:17:23.841607  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.841618  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:23.841627  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:23.841691  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:23.884107  959882 cri.go:89] found id: ""
	I0308 04:17:23.884145  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.884155  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:23.884164  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:23.884234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:23.923334  959882 cri.go:89] found id: ""
	I0308 04:17:23.923364  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.923376  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:23.923383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:23.923468  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:23.964766  959882 cri.go:89] found id: ""
	I0308 04:17:23.964800  959882 logs.go:276] 0 containers: []
	W0308 04:17:23.964812  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:23.964820  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:23.964884  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:24.002201  959882 cri.go:89] found id: ""
	I0308 04:17:24.002229  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.002238  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:24.002248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:24.002305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:24.046986  959882 cri.go:89] found id: ""
	I0308 04:17:24.047017  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.047025  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:24.047031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:24.047090  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:24.085805  959882 cri.go:89] found id: ""
	I0308 04:17:24.085831  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.085839  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:24.085845  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:24.085898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:24.123907  959882 cri.go:89] found id: ""
	I0308 04:17:24.123941  959882 logs.go:276] 0 containers: []
	W0308 04:17:24.123951  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:24.123965  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:24.123984  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:22.534242  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.033813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.345284  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:25.346410  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:27.841793  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:23.573068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:26.072073  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:24.180674  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:24.180715  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:24.195166  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:24.195196  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:24.292487  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:24.292512  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:24.292529  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:24.385425  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:24.385460  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:26.931524  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:26.946108  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:26.946165  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:26.985883  959882 cri.go:89] found id: ""
	I0308 04:17:26.985910  959882 logs.go:276] 0 containers: []
	W0308 04:17:26.985918  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:26.985928  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:26.985990  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:27.027957  959882 cri.go:89] found id: ""
	I0308 04:17:27.028003  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.028014  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:27.028024  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:27.028091  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:27.071671  959882 cri.go:89] found id: ""
	I0308 04:17:27.071755  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.071771  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:27.071780  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:27.071846  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:27.116639  959882 cri.go:89] found id: ""
	I0308 04:17:27.116673  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.116685  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:27.116694  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:27.116759  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:27.153287  959882 cri.go:89] found id: ""
	I0308 04:17:27.153314  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.153323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:27.153330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:27.153380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:27.196736  959882 cri.go:89] found id: ""
	I0308 04:17:27.196774  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.196787  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:27.196795  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:27.196867  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:27.233931  959882 cri.go:89] found id: ""
	I0308 04:17:27.233967  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.233978  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:27.233986  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:27.234057  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:27.273217  959882 cri.go:89] found id: ""
	I0308 04:17:27.273249  959882 logs.go:276] 0 containers: []
	W0308 04:17:27.273259  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:27.273294  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:27.273316  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:27.326798  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:27.326831  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:27.341897  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:27.341927  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:27.420060  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:27.420086  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:27.420104  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:27.506318  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:27.506355  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:27.531758  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.031082  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:29.842395  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.844163  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:28.573265  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:31.071578  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.071848  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:30.052902  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:30.068134  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:30.068224  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:30.107384  959882 cri.go:89] found id: ""
	I0308 04:17:30.107413  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.107422  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:30.107429  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:30.107485  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:30.149470  959882 cri.go:89] found id: ""
	I0308 04:17:30.149508  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.149520  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:30.149529  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:30.149606  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:30.191584  959882 cri.go:89] found id: ""
	I0308 04:17:30.191618  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.191631  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:30.191639  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:30.191715  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:30.235835  959882 cri.go:89] found id: ""
	I0308 04:17:30.235867  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.235880  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:30.235888  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:30.235963  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:30.292453  959882 cri.go:89] found id: ""
	I0308 04:17:30.292483  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.292494  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:30.292502  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:30.292571  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:30.333882  959882 cri.go:89] found id: ""
	I0308 04:17:30.333914  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.333926  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:30.333935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:30.334005  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:30.385385  959882 cri.go:89] found id: ""
	I0308 04:17:30.385420  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.385431  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:30.385439  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:30.385504  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:30.426338  959882 cri.go:89] found id: ""
	I0308 04:17:30.426366  959882 logs.go:276] 0 containers: []
	W0308 04:17:30.426376  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:30.426386  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:30.426401  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:30.484281  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:30.484320  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:30.500824  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:30.500858  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:30.584767  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:30.584803  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:30.584820  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:30.672226  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:30.672269  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:33.218403  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:33.234090  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:33.234156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:33.280149  959882 cri.go:89] found id: ""
	I0308 04:17:33.280183  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.280195  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:33.280203  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:33.280285  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:33.324537  959882 cri.go:89] found id: ""
	I0308 04:17:33.324566  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.324578  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:33.324590  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:33.324670  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:33.368466  959882 cri.go:89] found id: ""
	I0308 04:17:33.368498  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.368510  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:33.368517  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:33.368582  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:33.409950  959882 cri.go:89] found id: ""
	I0308 04:17:33.409980  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.409998  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:33.410006  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:33.410070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:33.452073  959882 cri.go:89] found id: ""
	I0308 04:17:33.452104  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.452116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:33.452125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:33.452197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:33.489568  959882 cri.go:89] found id: ""
	I0308 04:17:33.489596  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.489604  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:33.489614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:33.489676  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:33.526169  959882 cri.go:89] found id: ""
	I0308 04:17:33.526196  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.526206  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:33.526214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:33.526281  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:33.564686  959882 cri.go:89] found id: ""
	I0308 04:17:33.564712  959882 logs.go:276] 0 containers: []
	W0308 04:17:33.564721  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:33.564730  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:33.564743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:33.618119  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:33.618152  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:33.633675  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:33.633713  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:33.722357  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:33.722379  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:33.722393  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:33.802657  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:33.802694  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:32.530211  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:34.531039  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.531654  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:33.844353  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.344661  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:35.072184  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:37.073012  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:36.346274  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:36.362007  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:36.362087  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:36.402910  959882 cri.go:89] found id: ""
	I0308 04:17:36.402941  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.402951  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:36.402957  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:36.403017  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:36.442936  959882 cri.go:89] found id: ""
	I0308 04:17:36.442968  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.442979  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:36.442986  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:36.443040  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:36.481292  959882 cri.go:89] found id: ""
	I0308 04:17:36.481321  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.481330  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:36.481336  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:36.481392  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:36.519748  959882 cri.go:89] found id: ""
	I0308 04:17:36.519772  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.519780  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:36.519787  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:36.519851  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:36.560104  959882 cri.go:89] found id: ""
	I0308 04:17:36.560130  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.560138  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:36.560143  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:36.560197  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:36.601983  959882 cri.go:89] found id: ""
	I0308 04:17:36.602010  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.602018  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:36.602024  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:36.602075  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:36.639441  959882 cri.go:89] found id: ""
	I0308 04:17:36.639468  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.639476  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:36.639482  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:36.639548  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:36.693512  959882 cri.go:89] found id: ""
	I0308 04:17:36.693541  959882 logs.go:276] 0 containers: []
	W0308 04:17:36.693551  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:36.693561  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:36.693573  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:36.712753  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:36.712789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:36.831565  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:36.831589  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:36.831613  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:36.911119  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:36.911157  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:36.955099  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:36.955143  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.032124  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.032170  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:38.843337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:41.341869  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:39.573505  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:42.072317  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
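The interleaved pod_ready lines come from the other profiles running in parallel in this job (processes 959302, 959419, and 959713), each polling its metrics-server pod for the Ready condition and logging False on every attempt. A minimal sketch of an equivalent manual check with kubectl, assuming a placeholder context name (the actual profile contexts are not shown in these lines; the pod name is the one logged above):

    # Prints "True" once the pod's Ready condition is satisfied; "False" while it is not.
    kubectl --context PROFILE_CONTEXT -n kube-system get pod metrics-server-57f55c9bc5-ljb42 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'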
	I0308 04:17:39.509129  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:39.525372  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:39.525434  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:39.564783  959882 cri.go:89] found id: ""
	I0308 04:17:39.564815  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.564828  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:39.564836  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:39.564900  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:39.606183  959882 cri.go:89] found id: ""
	I0308 04:17:39.606209  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.606220  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:39.606228  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:39.606305  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:39.649860  959882 cri.go:89] found id: ""
	I0308 04:17:39.649890  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.649898  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:39.649905  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:39.649966  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:39.699333  959882 cri.go:89] found id: ""
	I0308 04:17:39.699358  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.699374  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:39.699383  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:39.699446  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:39.737266  959882 cri.go:89] found id: ""
	I0308 04:17:39.737311  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.737320  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:39.737329  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:39.737400  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:39.786067  959882 cri.go:89] found id: ""
	I0308 04:17:39.786098  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.786109  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:39.786126  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:39.786196  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:39.833989  959882 cri.go:89] found id: ""
	I0308 04:17:39.834017  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.834025  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:39.834031  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:39.834100  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:39.874712  959882 cri.go:89] found id: ""
	I0308 04:17:39.874740  959882 logs.go:276] 0 containers: []
	W0308 04:17:39.874750  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:39.874761  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:39.874774  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:39.929495  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:39.929532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:39.944336  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:39.944367  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:40.023748  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:40.023774  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:40.023789  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:40.107405  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:40.107444  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:42.652355  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:42.671032  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:42.671102  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:42.722291  959882 cri.go:89] found id: ""
	I0308 04:17:42.722322  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.722335  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:42.722343  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:42.722411  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:42.767668  959882 cri.go:89] found id: ""
	I0308 04:17:42.767705  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.767776  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:42.767796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:42.767863  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:42.819452  959882 cri.go:89] found id: ""
	I0308 04:17:42.819492  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.819505  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:42.819513  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:42.819587  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:42.860996  959882 cri.go:89] found id: ""
	I0308 04:17:42.861025  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.861038  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:42.861046  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:42.861117  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:42.898846  959882 cri.go:89] found id: ""
	I0308 04:17:42.898880  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.898892  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:42.898899  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:42.898955  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:42.941193  959882 cri.go:89] found id: ""
	I0308 04:17:42.941226  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.941237  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:42.941247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:42.941334  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:42.984611  959882 cri.go:89] found id: ""
	I0308 04:17:42.984644  959882 logs.go:276] 0 containers: []
	W0308 04:17:42.984656  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:42.984665  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:42.984732  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:43.023518  959882 cri.go:89] found id: ""
	I0308 04:17:43.023543  959882 logs.go:276] 0 containers: []
	W0308 04:17:43.023552  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:43.023562  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:43.023575  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:43.105773  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:43.105798  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:43.105815  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:43.191641  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:43.191684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:43.234424  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:43.234463  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:43.285871  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:43.285908  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:43.038213  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.529384  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:43.346871  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.842000  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.843164  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:44.572721  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:47.072177  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:45.801565  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:45.816939  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:45.817022  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:45.854790  959882 cri.go:89] found id: ""
	I0308 04:17:45.854816  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.854825  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:45.854833  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:45.854899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:45.898272  959882 cri.go:89] found id: ""
	I0308 04:17:45.898299  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.898311  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:45.898318  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:45.898385  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:45.937664  959882 cri.go:89] found id: ""
	I0308 04:17:45.937700  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.937712  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:45.937720  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:45.937797  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:45.976278  959882 cri.go:89] found id: ""
	I0308 04:17:45.976310  959882 logs.go:276] 0 containers: []
	W0308 04:17:45.976320  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:45.976328  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:45.976409  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:46.012953  959882 cri.go:89] found id: ""
	I0308 04:17:46.012983  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.012994  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:46.013001  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:46.013071  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:46.053462  959882 cri.go:89] found id: ""
	I0308 04:17:46.053489  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.053498  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:46.053504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:46.053569  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:46.095221  959882 cri.go:89] found id: ""
	I0308 04:17:46.095252  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.095264  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:46.095276  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:46.095396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:46.134890  959882 cri.go:89] found id: ""
	I0308 04:17:46.134914  959882 logs.go:276] 0 containers: []
	W0308 04:17:46.134922  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:46.134932  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:46.134948  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:46.188788  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:46.188823  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:46.203843  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:46.203877  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:46.279846  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:46.279872  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:46.279889  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:46.359747  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:46.359784  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:48.912993  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:48.927992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:48.928065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:48.966498  959882 cri.go:89] found id: ""
	I0308 04:17:48.966529  959882 logs.go:276] 0 containers: []
	W0308 04:17:48.966537  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:48.966543  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:48.966594  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:49.005372  959882 cri.go:89] found id: ""
	I0308 04:17:49.005406  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.005420  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:49.005428  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:49.005492  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:49.049064  959882 cri.go:89] found id: ""
	I0308 04:17:49.049107  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.049120  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:49.049129  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:49.049206  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:49.091743  959882 cri.go:89] found id: ""
	I0308 04:17:49.091770  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.091778  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:49.091784  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:49.091836  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:49.138158  959882 cri.go:89] found id: ""
	I0308 04:17:49.138198  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.138211  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:49.138220  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:49.138293  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:47.532313  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.030625  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.031556  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:50.343306  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:52.841950  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.074229  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:51.572609  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:49.180273  959882 cri.go:89] found id: ""
	I0308 04:17:49.180314  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.180323  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:49.180330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:49.180393  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:49.220219  959882 cri.go:89] found id: ""
	I0308 04:17:49.220260  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.220273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:49.220280  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:49.220350  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:49.263653  959882 cri.go:89] found id: ""
	I0308 04:17:49.263687  959882 logs.go:276] 0 containers: []
	W0308 04:17:49.263700  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:49.263742  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:49.263766  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:49.279585  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:49.279623  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:49.355373  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:49.355397  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:49.355411  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:49.440302  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:49.440341  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:49.482642  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:49.482680  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.038469  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:52.053465  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:52.053549  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:52.097994  959882 cri.go:89] found id: ""
	I0308 04:17:52.098022  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.098033  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:52.098042  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:52.098123  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:52.141054  959882 cri.go:89] found id: ""
	I0308 04:17:52.141084  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.141096  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:52.141103  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:52.141169  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:52.181460  959882 cri.go:89] found id: ""
	I0308 04:17:52.181489  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.181498  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:52.181504  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:52.181556  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:52.219024  959882 cri.go:89] found id: ""
	I0308 04:17:52.219054  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.219063  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:52.219069  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:52.219134  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:52.262107  959882 cri.go:89] found id: ""
	I0308 04:17:52.262138  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.262149  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:52.262158  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:52.262213  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:52.302158  959882 cri.go:89] found id: ""
	I0308 04:17:52.302191  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.302204  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:52.302214  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:52.302284  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:52.349782  959882 cri.go:89] found id: ""
	I0308 04:17:52.349811  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.349820  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:52.349826  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:52.349892  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:52.388691  959882 cri.go:89] found id: ""
	I0308 04:17:52.388717  959882 logs.go:276] 0 containers: []
	W0308 04:17:52.388726  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:52.388736  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:52.388755  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:52.461374  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:52.461395  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:52.461410  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:52.543953  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:52.543990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:52.593148  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:52.593187  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:52.647954  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:52.648006  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:54.034351  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.529938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.845337  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:57.342184  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:54.071941  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:56.072263  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:58.072968  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:55.164361  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:55.179301  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:55.179367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:55.224203  959882 cri.go:89] found id: ""
	I0308 04:17:55.224230  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.224240  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:55.224250  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:55.224324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:55.268442  959882 cri.go:89] found id: ""
	I0308 04:17:55.268470  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.268481  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:55.268488  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:55.268552  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:55.312953  959882 cri.go:89] found id: ""
	I0308 04:17:55.312980  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.312991  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:55.313000  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:55.313065  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:55.352718  959882 cri.go:89] found id: ""
	I0308 04:17:55.352753  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.352763  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:55.352771  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:55.352837  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:55.398676  959882 cri.go:89] found id: ""
	I0308 04:17:55.398707  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.398719  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:55.398727  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:55.398795  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:55.441936  959882 cri.go:89] found id: ""
	I0308 04:17:55.441972  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.441984  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:55.441992  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:55.442062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:55.480896  959882 cri.go:89] found id: ""
	I0308 04:17:55.480932  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.480944  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:55.480952  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:55.481013  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:55.519385  959882 cri.go:89] found id: ""
	I0308 04:17:55.519416  959882 logs.go:276] 0 containers: []
	W0308 04:17:55.519425  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:55.519436  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:55.519450  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:55.577904  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:55.577937  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:55.593932  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:55.593958  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:55.681970  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:55.681995  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:55.682009  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:55.765653  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:55.765693  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.315540  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:17:58.330702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:17:58.330776  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:17:58.370957  959882 cri.go:89] found id: ""
	I0308 04:17:58.370990  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.371002  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:17:58.371011  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:17:58.371076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:17:58.412776  959882 cri.go:89] found id: ""
	I0308 04:17:58.412817  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.412830  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:17:58.412838  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:17:58.412915  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:17:58.449819  959882 cri.go:89] found id: ""
	I0308 04:17:58.449852  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.449869  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:17:58.449877  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:17:58.449947  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:17:58.487823  959882 cri.go:89] found id: ""
	I0308 04:17:58.487856  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.487869  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:17:58.487878  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:17:58.487944  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:17:58.531075  959882 cri.go:89] found id: ""
	I0308 04:17:58.531107  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.531117  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:17:58.531125  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:17:58.531191  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:17:58.567775  959882 cri.go:89] found id: ""
	I0308 04:17:58.567806  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.567816  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:17:58.567824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:17:58.567899  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:17:58.608297  959882 cri.go:89] found id: ""
	I0308 04:17:58.608324  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.608339  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:17:58.608346  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:17:58.608412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:17:58.647443  959882 cri.go:89] found id: ""
	I0308 04:17:58.647473  959882 logs.go:276] 0 containers: []
	W0308 04:17:58.647484  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:17:58.647495  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:17:58.647513  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:17:58.701854  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:17:58.701885  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:17:58.717015  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:17:58.717044  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:17:58.788218  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:17:58.788248  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:17:58.788264  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:17:58.872665  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:17:58.872707  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:17:58.532504  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.032813  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:17:59.346922  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.845023  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:00.078299  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:02.574456  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:01.421097  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:01.435489  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:01.435553  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:01.481339  959882 cri.go:89] found id: ""
	I0308 04:18:01.481370  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.481379  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:01.481385  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:01.481452  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:01.517289  959882 cri.go:89] found id: ""
	I0308 04:18:01.517324  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.517335  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:01.517342  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:01.517407  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:01.555205  959882 cri.go:89] found id: ""
	I0308 04:18:01.555235  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.555242  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:01.555248  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:01.555316  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:01.592256  959882 cri.go:89] found id: ""
	I0308 04:18:01.592280  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.592288  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:01.592294  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:01.592351  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:01.634929  959882 cri.go:89] found id: ""
	I0308 04:18:01.634958  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.634967  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:01.634973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:01.635025  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:01.676771  959882 cri.go:89] found id: ""
	I0308 04:18:01.676797  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.676805  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:01.676812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:01.676868  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:01.718632  959882 cri.go:89] found id: ""
	I0308 04:18:01.718663  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.718673  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:01.718680  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:01.718751  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:01.753772  959882 cri.go:89] found id: ""
	I0308 04:18:01.753802  959882 logs.go:276] 0 containers: []
	W0308 04:18:01.753813  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:01.753827  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:01.753844  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:01.801364  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:01.801394  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:01.854697  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:01.854729  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:01.870115  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:01.870141  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:01.941652  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:01.941676  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:01.941691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:03.035185  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:05.530549  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.344096  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:06.841204  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.579905  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:07.073136  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:04.525984  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:04.541436  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:04.541512  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:04.580670  959882 cri.go:89] found id: ""
	I0308 04:18:04.580695  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.580705  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:04.580713  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:04.580779  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:04.625683  959882 cri.go:89] found id: ""
	I0308 04:18:04.625712  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.625722  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:04.625730  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:04.625806  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:04.664669  959882 cri.go:89] found id: ""
	I0308 04:18:04.664703  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.664715  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:04.664723  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:04.664792  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:04.711983  959882 cri.go:89] found id: ""
	I0308 04:18:04.712011  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.712022  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:04.712030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:04.712097  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:04.753030  959882 cri.go:89] found id: ""
	I0308 04:18:04.753061  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.753075  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:04.753083  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:04.753153  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:04.804201  959882 cri.go:89] found id: ""
	I0308 04:18:04.804233  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.804246  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:04.804254  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:04.804349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:04.843425  959882 cri.go:89] found id: ""
	I0308 04:18:04.843457  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.843468  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:04.843475  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:04.843541  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:04.898911  959882 cri.go:89] found id: ""
	I0308 04:18:04.898943  959882 logs.go:276] 0 containers: []
	W0308 04:18:04.898954  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:04.898997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:04.899023  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:04.954840  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:04.954879  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:04.972476  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:04.972508  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:05.053733  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:05.053759  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:05.053775  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:05.139701  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:05.139733  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:07.691432  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:07.707285  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:07.707366  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:07.744936  959882 cri.go:89] found id: ""
	I0308 04:18:07.744966  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.744977  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:07.744987  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:07.745056  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:07.781761  959882 cri.go:89] found id: ""
	I0308 04:18:07.781793  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.781804  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:07.781812  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:07.781887  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:07.818818  959882 cri.go:89] found id: ""
	I0308 04:18:07.818846  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.818857  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:07.818865  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:07.818934  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:07.857011  959882 cri.go:89] found id: ""
	I0308 04:18:07.857038  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.857048  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:07.857056  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:07.857108  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:07.902836  959882 cri.go:89] found id: ""
	I0308 04:18:07.902869  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.902883  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:07.902890  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:07.902957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:07.941130  959882 cri.go:89] found id: ""
	I0308 04:18:07.941166  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.941176  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:07.941186  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:07.941254  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:07.979955  959882 cri.go:89] found id: ""
	I0308 04:18:07.979988  959882 logs.go:276] 0 containers: []
	W0308 04:18:07.979996  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:07.980002  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:07.980070  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:08.022877  959882 cri.go:89] found id: ""
	I0308 04:18:08.022902  959882 logs.go:276] 0 containers: []
	W0308 04:18:08.022910  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:08.022921  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:08.022934  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:08.040581  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:08.040609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:08.113610  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:08.113636  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:08.113653  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:08.196662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:08.196705  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:08.243138  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:08.243177  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:07.530653  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.030705  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:08.841789  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.843472  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:09.572514  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:12.071868  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:10.797931  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:10.813219  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:10.813306  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:10.854473  959882 cri.go:89] found id: ""
	I0308 04:18:10.854496  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.854504  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:10.854510  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:10.854560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:10.892537  959882 cri.go:89] found id: ""
	I0308 04:18:10.892560  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.892567  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:10.892574  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:10.892644  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:10.931135  959882 cri.go:89] found id: ""
	I0308 04:18:10.931169  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.931182  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:10.931190  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:10.931265  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:10.969480  959882 cri.go:89] found id: ""
	I0308 04:18:10.969505  959882 logs.go:276] 0 containers: []
	W0308 04:18:10.969512  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:10.969518  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:10.969568  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:11.006058  959882 cri.go:89] found id: ""
	I0308 04:18:11.006082  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.006091  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:11.006097  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:11.006156  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:11.071128  959882 cri.go:89] found id: ""
	I0308 04:18:11.071153  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.071161  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:11.071168  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:11.071228  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:11.113318  959882 cri.go:89] found id: ""
	I0308 04:18:11.113345  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.113353  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:11.113359  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:11.113420  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:11.149717  959882 cri.go:89] found id: ""
	I0308 04:18:11.149749  959882 logs.go:276] 0 containers: []
	W0308 04:18:11.149759  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:11.149768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:11.149782  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:11.200794  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:11.200828  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:11.216405  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:11.216431  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:11.291392  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:11.291428  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:11.291445  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:11.380296  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:11.380332  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:13.930398  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:13.944957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:13.945023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:13.984671  959882 cri.go:89] found id: ""
	I0308 04:18:13.984702  959882 logs.go:276] 0 containers: []
	W0308 04:18:13.984715  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:13.984724  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:13.984799  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:14.023049  959882 cri.go:89] found id: ""
	I0308 04:18:14.023078  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.023102  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:14.023112  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:14.023200  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:14.068393  959882 cri.go:89] found id: ""
	I0308 04:18:14.068420  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.068428  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:14.068435  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:14.068496  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:14.107499  959882 cri.go:89] found id: ""
	I0308 04:18:14.107527  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.107535  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:14.107541  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:14.107593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:14.145612  959882 cri.go:89] found id: ""
	I0308 04:18:14.145640  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.145650  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:14.145657  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:14.145724  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:12.529589  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.530410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.531442  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:13.343065  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:15.842764  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:17.843038  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.075166  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:16.572575  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:14.183668  959882 cri.go:89] found id: ""
	I0308 04:18:14.183696  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.183708  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:14.183717  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:14.183791  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:14.222183  959882 cri.go:89] found id: ""
	I0308 04:18:14.222219  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.222230  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:14.222239  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:14.222311  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:14.261944  959882 cri.go:89] found id: ""
	I0308 04:18:14.261971  959882 logs.go:276] 0 containers: []
	W0308 04:18:14.261979  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:14.261990  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:14.262003  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:14.308195  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:14.308229  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:14.362209  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:14.362245  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:14.379079  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:14.379107  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:14.458886  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:14.458915  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:14.458929  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.040295  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:17.059434  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:17.059513  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:17.100101  959882 cri.go:89] found id: ""
	I0308 04:18:17.100132  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.100142  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:17.100149  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:17.100209  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:17.144821  959882 cri.go:89] found id: ""
	I0308 04:18:17.144846  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.144857  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:17.144863  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:17.144923  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:17.188612  959882 cri.go:89] found id: ""
	I0308 04:18:17.188646  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.188666  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:17.188676  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:17.188746  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:17.229613  959882 cri.go:89] found id: ""
	I0308 04:18:17.229645  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.229658  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:17.229667  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:17.229741  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:17.267280  959882 cri.go:89] found id: ""
	I0308 04:18:17.267311  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.267323  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:17.267331  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:17.267394  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:17.306925  959882 cri.go:89] found id: ""
	I0308 04:18:17.306966  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.306978  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:17.306987  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:17.307051  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:17.352436  959882 cri.go:89] found id: ""
	I0308 04:18:17.352466  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.352479  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:17.352488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:17.352560  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:17.394701  959882 cri.go:89] found id: ""
	I0308 04:18:17.394739  959882 logs.go:276] 0 containers: []
	W0308 04:18:17.394753  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:17.394768  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:17.394786  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:17.454373  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:17.454427  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:17.470032  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:17.470062  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:17.545395  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:17.545415  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:17.545429  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:17.637981  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:17.638018  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:19.034860  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:21.529375  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.344154  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:22.842828  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:18.572712  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.575585  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:23.073432  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:20.185312  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:20.200794  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:20.200872  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:20.241563  959882 cri.go:89] found id: ""
	I0308 04:18:20.241596  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.241609  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:20.241617  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:20.241692  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:20.277687  959882 cri.go:89] found id: ""
	I0308 04:18:20.277718  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.277731  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:20.277739  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:20.277802  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:20.316583  959882 cri.go:89] found id: ""
	I0308 04:18:20.316612  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.316623  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:20.316630  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:20.316694  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:20.356950  959882 cri.go:89] found id: ""
	I0308 04:18:20.357006  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.357018  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:20.357030  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:20.357104  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:20.398113  959882 cri.go:89] found id: ""
	I0308 04:18:20.398141  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.398154  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:20.398162  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:20.398215  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:20.435127  959882 cri.go:89] found id: ""
	I0308 04:18:20.435159  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.435170  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:20.435178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:20.435247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:20.480279  959882 cri.go:89] found id: ""
	I0308 04:18:20.480306  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.480314  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:20.480320  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:20.480380  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:20.517629  959882 cri.go:89] found id: ""
	I0308 04:18:20.517657  959882 logs.go:276] 0 containers: []
	W0308 04:18:20.517669  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:20.517682  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:20.517709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:20.575981  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:20.576013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:20.591454  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:20.591486  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:20.673154  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:20.673180  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:20.673198  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:20.752004  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:20.752042  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.294901  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:23.310935  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:23.310998  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:23.354357  959882 cri.go:89] found id: ""
	I0308 04:18:23.354388  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.354398  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:23.354406  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:23.354470  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:23.395603  959882 cri.go:89] found id: ""
	I0308 04:18:23.395633  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.395641  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:23.395667  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:23.395733  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:23.435836  959882 cri.go:89] found id: ""
	I0308 04:18:23.435864  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.435873  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:23.435879  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:23.435988  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:23.477483  959882 cri.go:89] found id: ""
	I0308 04:18:23.477508  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.477516  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:23.477522  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:23.477573  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:23.519892  959882 cri.go:89] found id: ""
	I0308 04:18:23.519917  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.519926  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:23.519932  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:23.519996  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:23.562814  959882 cri.go:89] found id: ""
	I0308 04:18:23.562835  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.562843  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:23.562849  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:23.562906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:23.604311  959882 cri.go:89] found id: ""
	I0308 04:18:23.604342  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.604350  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:23.604356  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:23.604408  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:23.643221  959882 cri.go:89] found id: ""
	I0308 04:18:23.643252  959882 logs.go:276] 0 containers: []
	W0308 04:18:23.643263  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:23.643276  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:23.643291  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:23.749308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:23.749336  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:23.749359  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:23.849996  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:23.850027  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:23.895997  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:23.896031  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:23.952267  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:23.952318  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:23.531212  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.031884  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.342243  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.342282  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:25.572487  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:27.574158  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:26.468449  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:26.482055  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:26.482139  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:26.521589  959882 cri.go:89] found id: ""
	I0308 04:18:26.521613  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.521621  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:26.521628  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:26.521677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:26.564903  959882 cri.go:89] found id: ""
	I0308 04:18:26.564934  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.564946  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:26.564953  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:26.565021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:26.604911  959882 cri.go:89] found id: ""
	I0308 04:18:26.604938  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.604949  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:26.604956  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:26.605024  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:26.642763  959882 cri.go:89] found id: ""
	I0308 04:18:26.642797  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.642808  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:26.642815  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:26.642877  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:26.685349  959882 cri.go:89] found id: ""
	I0308 04:18:26.685385  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.685398  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:26.685406  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:26.685474  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:26.725235  959882 cri.go:89] found id: ""
	I0308 04:18:26.725260  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.725268  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:26.725284  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:26.725346  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:26.763029  959882 cri.go:89] found id: ""
	I0308 04:18:26.763057  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.763068  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:26.763076  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:26.763140  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:26.802668  959882 cri.go:89] found id: ""
	I0308 04:18:26.802699  959882 logs.go:276] 0 containers: []
	W0308 04:18:26.802711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:26.802731  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:26.802749  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:26.862622  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:26.862667  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:26.879467  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:26.879499  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:26.955714  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:26.955742  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:26.955758  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:27.037466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:27.037501  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:28.530149  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.530426  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.343054  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:31.841865  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:30.073463  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:32.074620  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:29.581945  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:29.602053  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:29.602115  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:29.656718  959882 cri.go:89] found id: ""
	I0308 04:18:29.656748  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.656757  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:29.656763  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:29.656827  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:29.717426  959882 cri.go:89] found id: ""
	I0308 04:18:29.717454  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.717464  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:29.717473  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:29.717540  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:29.768923  959882 cri.go:89] found id: ""
	I0308 04:18:29.768957  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.768970  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:29.768979  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:29.769050  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:29.808020  959882 cri.go:89] found id: ""
	I0308 04:18:29.808047  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.808058  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:29.808065  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:29.808135  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:29.848555  959882 cri.go:89] found id: ""
	I0308 04:18:29.848581  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.848589  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:29.848594  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:29.848645  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:29.887975  959882 cri.go:89] found id: ""
	I0308 04:18:29.888001  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.888008  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:29.888015  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:29.888067  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:29.926574  959882 cri.go:89] found id: ""
	I0308 04:18:29.926612  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.926621  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:29.926627  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:29.926677  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:29.963060  959882 cri.go:89] found id: ""
	I0308 04:18:29.963090  959882 logs.go:276] 0 containers: []
	W0308 04:18:29.963103  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:29.963115  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:29.963131  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:30.016965  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:30.017002  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:30.033171  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:30.033200  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:30.113858  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:30.113889  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:30.113907  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:30.195466  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:30.195503  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:32.741402  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:32.755093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:32.755181  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:32.793136  959882 cri.go:89] found id: ""
	I0308 04:18:32.793179  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.793188  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:32.793195  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:32.793291  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:32.829963  959882 cri.go:89] found id: ""
	I0308 04:18:32.829997  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.830010  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:32.830018  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:32.830076  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:32.869811  959882 cri.go:89] found id: ""
	I0308 04:18:32.869839  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.869851  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:32.869859  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:32.869927  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:32.907562  959882 cri.go:89] found id: ""
	I0308 04:18:32.907593  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.907605  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:32.907614  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:32.907681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:32.945690  959882 cri.go:89] found id: ""
	I0308 04:18:32.945723  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.945734  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:32.945742  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:32.945811  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:32.985917  959882 cri.go:89] found id: ""
	I0308 04:18:32.985953  959882 logs.go:276] 0 containers: []
	W0308 04:18:32.985964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:32.985970  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:32.986031  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:33.026274  959882 cri.go:89] found id: ""
	I0308 04:18:33.026304  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.026316  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:33.026323  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:33.026386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:33.068026  959882 cri.go:89] found id: ""
	I0308 04:18:33.068059  959882 logs.go:276] 0 containers: []
	W0308 04:18:33.068072  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:33.068084  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:33.068103  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:33.118340  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:33.118378  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:33.172606  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:33.172645  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:33.190169  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:33.190199  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:33.272561  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:33.272590  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:33.272609  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:33.035330  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.530004  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:34.341744  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.344748  959419 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:36.836085  959419 pod_ready.go:81] duration metric: took 4m0.001021321s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:36.836121  959419 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qnq74" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:18:36.836158  959419 pod_ready.go:38] duration metric: took 4m12.553235197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:36.836217  959419 kubeadm.go:591] duration metric: took 4m20.149646521s to restartPrimaryControlPlane
	W0308 04:18:36.836310  959419 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:18:36.836356  959419 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:18:34.573568  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:37.074131  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:35.852974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:35.866693  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:35.866752  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:35.908451  959882 cri.go:89] found id: ""
	I0308 04:18:35.908475  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.908484  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:35.908491  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:35.908551  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:35.955021  959882 cri.go:89] found id: ""
	I0308 04:18:35.955051  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.955060  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:35.955066  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:35.955128  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:35.996771  959882 cri.go:89] found id: ""
	I0308 04:18:35.996803  959882 logs.go:276] 0 containers: []
	W0308 04:18:35.996816  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:35.996824  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:35.996898  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:36.044099  959882 cri.go:89] found id: ""
	I0308 04:18:36.044128  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.044139  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:36.044147  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:36.044214  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:36.086034  959882 cri.go:89] found id: ""
	I0308 04:18:36.086060  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.086067  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:36.086073  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:36.086120  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:36.123317  959882 cri.go:89] found id: ""
	I0308 04:18:36.123345  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.123354  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:36.123360  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:36.123421  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:36.159481  959882 cri.go:89] found id: ""
	I0308 04:18:36.159510  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.159521  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:36.159532  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:36.159593  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:36.196836  959882 cri.go:89] found id: ""
	I0308 04:18:36.196872  959882 logs.go:276] 0 containers: []
	W0308 04:18:36.196885  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:36.196898  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:36.196918  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:36.275042  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:36.275067  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:36.275086  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:36.359925  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:36.359956  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:36.403773  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:36.403809  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:36.460900  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:36.460938  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:38.978539  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:38.992702  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:38.992800  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:39.032467  959882 cri.go:89] found id: ""
	I0308 04:18:39.032498  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.032509  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:39.032516  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:39.032586  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:39.079747  959882 cri.go:89] found id: ""
	I0308 04:18:39.079777  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.079788  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:39.079796  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:39.079864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:39.122361  959882 cri.go:89] found id: ""
	I0308 04:18:39.122394  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.122419  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:39.122428  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:39.122508  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:37.530906  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.532410  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:42.032098  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.074725  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:41.573530  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:39.160158  959882 cri.go:89] found id: ""
	I0308 04:18:39.160184  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.160192  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:39.160198  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:39.160255  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:39.196716  959882 cri.go:89] found id: ""
	I0308 04:18:39.196746  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.196758  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:39.196766  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:39.196838  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:39.242787  959882 cri.go:89] found id: ""
	I0308 04:18:39.242817  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.242826  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:39.242832  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:39.242891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:39.284235  959882 cri.go:89] found id: ""
	I0308 04:18:39.284264  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.284273  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:39.284279  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:39.284349  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:39.327872  959882 cri.go:89] found id: ""
	I0308 04:18:39.327905  959882 logs.go:276] 0 containers: []
	W0308 04:18:39.327917  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:39.327936  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:39.327955  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:39.410662  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:39.410703  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:39.458808  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:39.458846  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:39.513143  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:39.513179  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:39.530778  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:39.530811  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:39.615093  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.116182  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:42.129822  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:42.129906  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:42.174417  959882 cri.go:89] found id: ""
	I0308 04:18:42.174448  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.174457  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:42.174463  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:42.174528  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:42.215371  959882 cri.go:89] found id: ""
	I0308 04:18:42.215410  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.215422  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:42.215430  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:42.215518  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:42.265403  959882 cri.go:89] found id: ""
	I0308 04:18:42.265463  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.265478  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:42.265488  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:42.265565  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:42.309537  959882 cri.go:89] found id: ""
	I0308 04:18:42.309568  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.309587  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:42.309597  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:42.309666  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:42.346576  959882 cri.go:89] found id: ""
	I0308 04:18:42.346609  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.346618  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:42.346625  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:42.346681  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:42.386229  959882 cri.go:89] found id: ""
	I0308 04:18:42.386261  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.386287  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:42.386295  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:42.386367  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:42.423960  959882 cri.go:89] found id: ""
	I0308 04:18:42.423991  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.424001  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:42.424008  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:42.424080  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:42.460346  959882 cri.go:89] found id: ""
	I0308 04:18:42.460382  959882 logs.go:276] 0 containers: []
	W0308 04:18:42.460393  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:42.460406  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:42.460424  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:42.512675  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:42.512709  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:42.529748  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:42.529776  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:42.612194  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:42.612217  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:42.612233  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:42.702819  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:42.702864  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:44.529816  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.534668  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:44.072628  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:46.573371  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:45.245974  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:45.259948  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:45.260042  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:45.303892  959882 cri.go:89] found id: ""
	I0308 04:18:45.303928  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.303941  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:45.303950  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:45.304021  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:45.342248  959882 cri.go:89] found id: ""
	I0308 04:18:45.342281  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.342292  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:45.342300  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:45.342370  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:45.387140  959882 cri.go:89] found id: ""
	I0308 04:18:45.387163  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.387171  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:45.387178  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:45.387239  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:45.423062  959882 cri.go:89] found id: ""
	I0308 04:18:45.423097  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.423108  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:45.423116  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:45.423188  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:45.464464  959882 cri.go:89] found id: ""
	I0308 04:18:45.464496  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.464506  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:45.464514  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:45.464583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:45.505684  959882 cri.go:89] found id: ""
	I0308 04:18:45.505715  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.505724  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:45.505731  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:45.505782  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:45.548143  959882 cri.go:89] found id: ""
	I0308 04:18:45.548171  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.548179  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:45.548185  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:45.548258  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:45.588984  959882 cri.go:89] found id: ""
	I0308 04:18:45.589013  959882 logs.go:276] 0 containers: []
	W0308 04:18:45.589023  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:45.589035  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:45.589051  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:45.630896  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:45.630936  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:45.687796  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:45.687832  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:45.706146  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:45.706178  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:45.786428  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:45.786457  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:45.786474  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.370213  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:48.384559  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:48.384649  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:48.420452  959882 cri.go:89] found id: ""
	I0308 04:18:48.420475  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.420483  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:48.420489  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:48.420558  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:48.457346  959882 cri.go:89] found id: ""
	I0308 04:18:48.457377  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.457388  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:48.457396  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:48.457459  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:48.493188  959882 cri.go:89] found id: ""
	I0308 04:18:48.493222  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.493235  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:48.493242  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:48.493324  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:48.533147  959882 cri.go:89] found id: ""
	I0308 04:18:48.533177  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.533187  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:48.533195  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:48.533282  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:48.574279  959882 cri.go:89] found id: ""
	I0308 04:18:48.574305  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.574316  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:48.574325  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:48.574396  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:48.612854  959882 cri.go:89] found id: ""
	I0308 04:18:48.612895  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.612908  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:48.612917  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:48.612992  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:48.650900  959882 cri.go:89] found id: ""
	I0308 04:18:48.650936  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.650950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:48.650957  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:48.651023  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:48.687457  959882 cri.go:89] found id: ""
	I0308 04:18:48.687490  959882 logs.go:276] 0 containers: []
	W0308 04:18:48.687502  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:48.687514  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:48.687532  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:48.741559  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:48.741594  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:48.757826  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:48.757867  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:48.835308  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:48.835333  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:48.835352  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:48.920952  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:48.920992  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:49.030505  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.531220  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:48.573752  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.072677  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:53.072977  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:51.465604  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:51.480785  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:51.480864  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:51.522108  959882 cri.go:89] found id: ""
	I0308 04:18:51.522138  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.522151  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:51.522160  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:51.522240  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:51.568586  959882 cri.go:89] found id: ""
	I0308 04:18:51.568631  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.568642  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:51.568649  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:51.568702  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:51.609134  959882 cri.go:89] found id: ""
	I0308 04:18:51.609157  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.609176  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:51.609182  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:51.609234  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:51.650570  959882 cri.go:89] found id: ""
	I0308 04:18:51.650596  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.650606  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:51.650613  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:51.650669  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:51.689043  959882 cri.go:89] found id: ""
	I0308 04:18:51.689068  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.689077  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:51.689082  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:51.689148  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:51.724035  959882 cri.go:89] found id: ""
	I0308 04:18:51.724059  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.724068  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:51.724074  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:51.724130  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:51.762945  959882 cri.go:89] found id: ""
	I0308 04:18:51.762976  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.762987  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:51.762996  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:51.763062  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:51.804502  959882 cri.go:89] found id: ""
	I0308 04:18:51.804538  959882 logs.go:276] 0 containers: []
	W0308 04:18:51.804548  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:51.804559  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:51.804574  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:51.886747  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:51.886767  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:51.886783  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:51.968489  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:51.968531  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:52.014102  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:52.014139  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:52.090338  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:52.090373  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:54.029249  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:56.029394  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:55.572003  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:57.572068  959713 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:54.606317  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:54.624907  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:54.624986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:54.664808  959882 cri.go:89] found id: ""
	I0308 04:18:54.664838  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.664847  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:54.664853  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:54.664909  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:54.708980  959882 cri.go:89] found id: ""
	I0308 04:18:54.709009  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.709020  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:54.709032  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:54.709106  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:54.742072  959882 cri.go:89] found id: ""
	I0308 04:18:54.742102  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.742114  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:54.742122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:54.742184  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:54.777042  959882 cri.go:89] found id: ""
	I0308 04:18:54.777069  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.777077  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:54.777084  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:54.777146  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:54.815926  959882 cri.go:89] found id: ""
	I0308 04:18:54.815956  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.815966  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:54.815972  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:54.816045  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:54.854797  959882 cri.go:89] found id: ""
	I0308 04:18:54.854822  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.854831  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:54.854839  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:54.854891  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:54.895915  959882 cri.go:89] found id: ""
	I0308 04:18:54.895941  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.895950  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:54.895955  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:54.896007  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:54.934291  959882 cri.go:89] found id: ""
	I0308 04:18:54.934320  959882 logs.go:276] 0 containers: []
	W0308 04:18:54.934329  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:54.934338  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:54.934353  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:54.977691  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:54.977725  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:55.031957  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:55.031990  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:55.048604  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:55.048641  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:55.130497  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:55.130525  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:55.130542  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:57.714882  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:18:57.729812  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:57.729890  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:57.793388  959882 cri.go:89] found id: ""
	I0308 04:18:57.793476  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.793502  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:18:57.793515  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:57.793583  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:57.841783  959882 cri.go:89] found id: ""
	I0308 04:18:57.841812  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.841820  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:18:57.841827  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:57.841893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:57.884709  959882 cri.go:89] found id: ""
	I0308 04:18:57.884742  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.884753  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:18:57.884762  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:57.884834  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:57.923563  959882 cri.go:89] found id: ""
	I0308 04:18:57.923598  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.923610  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:18:57.923619  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:57.923697  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:57.959822  959882 cri.go:89] found id: ""
	I0308 04:18:57.959847  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.959855  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:18:57.959861  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:57.959918  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:57.999923  959882 cri.go:89] found id: ""
	I0308 04:18:57.999951  959882 logs.go:276] 0 containers: []
	W0308 04:18:57.999964  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:18:57.999973  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.000041  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.044975  959882 cri.go:89] found id: ""
	I0308 04:18:58.045007  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.045018  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.045027  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:18:58.045092  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:18:58.091659  959882 cri.go:89] found id: ""
	I0308 04:18:58.091697  959882 logs.go:276] 0 containers: []
	W0308 04:18:58.091710  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:18:58.091723  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:58.091740  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:58.160714  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.160753  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.176991  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.177050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:18:58.256178  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:18:58.256205  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:58.256222  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:58.337429  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:18:58.337466  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:58.032674  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:00.530921  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:18:58.565584  959713 pod_ready.go:81] duration metric: took 4m0.000584369s for pod "metrics-server-57f55c9bc5-ljb42" in "kube-system" namespace to be "Ready" ...
	E0308 04:18:58.565615  959713 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0308 04:18:58.565625  959713 pod_ready.go:38] duration metric: took 4m3.200982055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:18:58.565664  959713 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:18:58.565708  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:18:58.565763  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:18:58.623974  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:18:58.624002  959713 cri.go:89] found id: ""
	I0308 04:18:58.624012  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:18:58.624110  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.629356  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:18:58.629429  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:18:58.674703  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:58.674735  959713 cri.go:89] found id: ""
	I0308 04:18:58.674745  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:18:58.674809  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.679747  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:18:58.679810  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:18:58.723391  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:58.723424  959713 cri.go:89] found id: ""
	I0308 04:18:58.723435  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:18:58.723499  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.728904  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:18:58.728979  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:18:58.778606  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:18:58.778640  959713 cri.go:89] found id: ""
	I0308 04:18:58.778656  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:18:58.778724  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.783451  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:18:58.783511  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:18:58.835734  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:18:58.835759  959713 cri.go:89] found id: ""
	I0308 04:18:58.835766  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:18:58.835817  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.841005  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:18:58.841076  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:18:58.884738  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:58.884770  959713 cri.go:89] found id: ""
	I0308 04:18:58.884780  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:18:58.884850  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.890582  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:18:58.890656  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:18:58.929933  959713 cri.go:89] found id: ""
	I0308 04:18:58.929958  959713 logs.go:276] 0 containers: []
	W0308 04:18:58.929967  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:18:58.929973  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:18:58.930043  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:18:58.970118  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:18:58.970147  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:18:58.970152  959713 cri.go:89] found id: ""
	I0308 04:18:58.970160  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:18:58.970214  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.975223  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:18:58.979539  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:18:58.979557  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:18:58.995549  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:18:58.995579  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:18:59.177694  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:18:59.177723  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:18:59.226497  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:18:59.226529  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:18:59.269649  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:18:59.269678  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:18:59.322616  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:18:59.322649  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:18:59.872092  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:18:59.872148  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:18:59.922184  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:18:59.922218  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:18:59.983423  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:18:59.983460  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:00.037572  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:00.037604  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:00.084283  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:00.084320  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:00.125199  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:00.125240  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:00.172572  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:00.172615  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:02.714484  959713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:02.731757  959713 api_server.go:72] duration metric: took 4m15.107182338s to wait for apiserver process to appear ...
	I0308 04:19:02.731789  959713 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:02.731839  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:02.731897  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:02.770700  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:02.770722  959713 cri.go:89] found id: ""
	I0308 04:19:02.770733  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:02.770803  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.775617  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:02.775685  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:02.813955  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:02.813979  959713 cri.go:89] found id: ""
	I0308 04:19:02.813989  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:02.814051  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.818304  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:02.818359  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:02.870377  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:02.870405  959713 cri.go:89] found id: ""
	I0308 04:19:02.870416  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:02.870479  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.877180  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:02.877243  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:02.922793  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:02.922821  959713 cri.go:89] found id: ""
	I0308 04:19:02.922831  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:02.922898  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.927921  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:02.927993  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:02.970081  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:02.970123  959713 cri.go:89] found id: ""
	I0308 04:19:02.970137  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:02.970200  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:02.975064  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:02.975137  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:03.017419  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:03.017442  959713 cri.go:89] found id: ""
	I0308 04:19:03.017450  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:03.017528  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.024697  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:03.024778  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:03.078340  959713 cri.go:89] found id: ""
	I0308 04:19:03.078370  959713 logs.go:276] 0 containers: []
	W0308 04:19:03.078382  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:03.078390  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:03.078461  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:03.130317  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:03.130347  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.130353  959713 cri.go:89] found id: ""
	I0308 04:19:03.130363  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:03.130419  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.135692  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:03.140277  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:03.140298  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:03.155969  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:03.156005  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:03.282583  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:03.282626  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:00.885660  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:00.900483  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:00.900559  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:00.942042  959882 cri.go:89] found id: ""
	I0308 04:19:00.942075  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.942086  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:00.942095  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:00.942168  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:00.980127  959882 cri.go:89] found id: ""
	I0308 04:19:00.980160  959882 logs.go:276] 0 containers: []
	W0308 04:19:00.980169  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:00.980183  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:00.980247  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:01.019049  959882 cri.go:89] found id: ""
	I0308 04:19:01.019078  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.019090  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:01.019099  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:01.019164  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:01.063647  959882 cri.go:89] found id: ""
	I0308 04:19:01.063677  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.063689  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:01.063697  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:01.063762  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:01.103655  959882 cri.go:89] found id: ""
	I0308 04:19:01.103681  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.103691  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:01.103698  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:01.103764  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:01.144831  959882 cri.go:89] found id: ""
	I0308 04:19:01.144855  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.144863  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:01.144869  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:01.144929  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:01.184204  959882 cri.go:89] found id: ""
	I0308 04:19:01.184231  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.184241  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:01.184247  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:01.184296  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:01.221851  959882 cri.go:89] found id: ""
	I0308 04:19:01.221876  959882 logs.go:276] 0 containers: []
	W0308 04:19:01.221886  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:01.221899  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:01.221917  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:01.300161  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:01.300202  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:01.343554  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:01.343585  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:01.400927  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:01.400960  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:01.416018  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:01.416050  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:01.489986  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:03.990800  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:04.005571  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:04.005655  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:04.052263  959882 cri.go:89] found id: ""
	I0308 04:19:04.052293  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.052302  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:19:04.052309  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:04.052386  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:04.099911  959882 cri.go:89] found id: ""
	I0308 04:19:04.099944  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.099959  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:19:04.099967  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:04.100037  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:03.031020  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:05.034036  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:07.036338  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:03.330755  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:03.330787  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:03.382044  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:03.382082  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:03.843167  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:03.843215  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:03.888954  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:03.888994  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:03.934727  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:03.934757  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:03.988799  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:03.988833  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:04.054979  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:04.055013  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:04.121637  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:04.121671  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:04.180422  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:04.180463  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:04.247389  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:04.247421  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:06.801386  959713 api_server.go:253] Checking apiserver healthz at https://192.168.61.32:8444/healthz ...
	I0308 04:19:06.806575  959713 api_server.go:279] https://192.168.61.32:8444/healthz returned 200:
	ok
	I0308 04:19:06.808121  959713 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:06.808142  959713 api_server.go:131] duration metric: took 4.076344885s to wait for apiserver health ...
	I0308 04:19:06.808149  959713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:06.808177  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:19:06.808232  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:19:06.854313  959713 cri.go:89] found id: "bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:06.854336  959713 cri.go:89] found id: ""
	I0308 04:19:06.854344  959713 logs.go:276] 1 containers: [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c]
	I0308 04:19:06.854393  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.859042  959713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:19:06.859103  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:19:06.899497  959713 cri.go:89] found id: "811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:06.899519  959713 cri.go:89] found id: ""
	I0308 04:19:06.899526  959713 logs.go:276] 1 containers: [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7]
	I0308 04:19:06.899578  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.904327  959713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:19:06.904401  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:19:06.941154  959713 cri.go:89] found id: "8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:06.941180  959713 cri.go:89] found id: ""
	I0308 04:19:06.941190  959713 logs.go:276] 1 containers: [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370]
	I0308 04:19:06.941256  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.945817  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:06.945868  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:06.988371  959713 cri.go:89] found id: "c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:06.988401  959713 cri.go:89] found id: ""
	I0308 04:19:06.988411  959713 logs.go:276] 1 containers: [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f]
	I0308 04:19:06.988477  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:06.992981  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:06.993046  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:07.034905  959713 cri.go:89] found id: "f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:07.034931  959713 cri.go:89] found id: ""
	I0308 04:19:07.034940  959713 logs.go:276] 1 containers: [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963]
	I0308 04:19:07.035007  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.042849  959713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:07.042927  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:07.081657  959713 cri.go:89] found id: "0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:07.081682  959713 cri.go:89] found id: ""
	I0308 04:19:07.081691  959713 logs.go:276] 1 containers: [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6]
	I0308 04:19:07.081742  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.086101  959713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:07.086157  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:07.122717  959713 cri.go:89] found id: ""
	I0308 04:19:07.122746  959713 logs.go:276] 0 containers: []
	W0308 04:19:07.122754  959713 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:07.122760  959713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0308 04:19:07.122814  959713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0308 04:19:07.165383  959713 cri.go:89] found id: "c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.165408  959713 cri.go:89] found id: "0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
	I0308 04:19:07.165420  959713 cri.go:89] found id: ""
	I0308 04:19:07.165429  959713 logs.go:276] 2 containers: [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef]
	I0308 04:19:07.165478  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.170786  959713 ssh_runner.go:195] Run: which crictl
	I0308 04:19:07.175364  959713 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:07.175388  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.257412  959713 logs.go:123] Gathering logs for kube-scheduler [c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f] ...
	I0308 04:19:07.257450  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c935f4cc994f045c71ef8809e33ac7c5abf667208924396272836b0b938ed81f"
	I0308 04:19:07.298745  959713 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:07.298778  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:07.734747  959713 logs.go:123] Gathering logs for container status ...
	I0308 04:19:07.734792  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:07.782922  959713 logs.go:123] Gathering logs for storage-provisioner [c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be] ...
	I0308 04:19:07.782955  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30a2f482790158b80eeee56220c0f5d420c957372c0b12fb6e7778d9a5e98be"
	I0308 04:19:07.823451  959713 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:07.823485  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:07.837911  959713 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:07.837943  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0308 04:19:07.963821  959713 logs.go:123] Gathering logs for kube-apiserver [bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c] ...
	I0308 04:19:07.963872  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3188fde807f738da2471fae31b45b0766ea84058b269e80bf14d8d9095272c"
	I0308 04:19:08.011570  959713 logs.go:123] Gathering logs for etcd [811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7] ...
	I0308 04:19:08.011605  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 811f83f4d25b2545fb5f14d07d8c382fcc2c327fcd646ddbade7d562e99dc1d7"
	I0308 04:19:08.077712  959713 logs.go:123] Gathering logs for coredns [8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370] ...
	I0308 04:19:08.077747  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ce12798e302be23ef77c840c268a1888c2d8743b260e09eade8088ebfdc2370"
	I0308 04:19:08.116682  959713 logs.go:123] Gathering logs for kube-proxy [f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963] ...
	I0308 04:19:08.116711  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f153fe3d844da4bb2b325e162186558e4458dce2299233398e08411167da0963"
	I0308 04:19:08.160912  959713 logs.go:123] Gathering logs for kube-controller-manager [0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6] ...
	I0308 04:19:08.160942  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f0b6de5c1ff3559e967edf7595bf2bd4f7af68516a771eb3e34799c543111a6"
	I0308 04:19:08.218123  959713 logs.go:123] Gathering logs for storage-provisioner [0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef] ...
	I0308 04:19:08.218160  959713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0db38a5fe1838f00e33414d38f737bb679751bc74b33a452fbfb297bcbd376ef"
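(Note: the log-gathering steps above are plain crictl / journalctl invocations; a minimal sketch of the same commands run by hand over SSH on the node, with <container-id> as a placeholder for an ID returned by the ps call:)

    sudo crictl ps -a --quiet --name=storage-provisioner          # list matching container IDs
    sudo /usr/bin/crictl logs --tail 400 <container-id>            # last 400 log lines of one container
    sudo journalctl -u kubelet -n 400                              # kubelet unit logs
    sudo journalctl -u crio -n 400                                 # CRI-O unit logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # recent kernel warnings/errors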
	I0308 04:19:04.150850  959882 cri.go:89] found id: ""
	I0308 04:19:04.150875  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.150883  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:19:04.150892  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:19:04.150957  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:19:04.197770  959882 cri.go:89] found id: ""
	I0308 04:19:04.197805  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.197817  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:19:04.197825  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:19:04.197893  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:19:04.242902  959882 cri.go:89] found id: ""
	I0308 04:19:04.242931  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.242939  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:19:04.242946  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:19:04.243010  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:19:04.284302  959882 cri.go:89] found id: ""
	I0308 04:19:04.284334  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.284343  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:19:04.284350  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:19:04.284412  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:19:04.324392  959882 cri.go:89] found id: ""
	I0308 04:19:04.324431  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.324442  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:19:04.324451  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:19:04.324519  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:19:04.362667  959882 cri.go:89] found id: ""
	I0308 04:19:04.362699  959882 logs.go:276] 0 containers: []
	W0308 04:19:04.362711  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:19:04.362725  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:19:04.362743  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0308 04:19:04.377730  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:19:04.377759  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:19:04.447739  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:19:04.447768  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:19:04.447787  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:19:04.545720  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:19:04.545756  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:19:04.595378  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:19:04.595407  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:19:07.150314  959882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:07.164846  959882 kubeadm.go:591] duration metric: took 4m3.382652936s to restartPrimaryControlPlane
	W0308 04:19:07.164921  959882 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:07.164953  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:09.263923  959419 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.427534863s)
	I0308 04:19:09.264018  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.280767  959419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.292937  959419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.305111  959419 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.305127  959419 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.305165  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.316268  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.316332  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.327332  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.338073  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.338126  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.348046  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.358486  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.358524  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.369105  959419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.379317  959419 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.379365  959419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
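(Note: the grep-then-rm pairs above are the stale-kubeconfig cleanup minikube performs before re-running kubeadm init; a minimal hand-run equivalent over the same four files, assuming the same expected control-plane endpoint, would be:)

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep a file only if it already points at the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done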
	I0308 04:19:09.390684  959419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.452585  959419 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 04:19:09.452654  959419 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:09.627872  959419 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:09.628016  959419 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:09.628131  959419 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:09.895042  959419 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:09.666002  959882 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.501017775s)
	I0308 04:19:09.666079  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:09.682304  959882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:19:09.693957  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:19:09.706423  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:19:09.706456  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:19:09.706506  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:19:09.717661  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:19:09.717732  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:19:09.730502  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:19:09.744384  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:19:09.744445  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:19:09.758493  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.770465  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:19:09.770529  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:19:09.782859  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:19:09.795084  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:19:09.795144  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:19:09.807496  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:19:09.885636  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:19:09.885756  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:19:10.048648  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:19:10.048837  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:19:10.048973  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:19:10.255078  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:19:10.770901  959713 system_pods.go:59] 8 kube-system pods found
	I0308 04:19:10.770938  959713 system_pods.go:61] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.770944  959713 system_pods.go:61] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.770949  959713 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.770956  959713 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.770961  959713 system_pods.go:61] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.770966  959713 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.770974  959713 system_pods.go:61] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.770982  959713 system_pods.go:61] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.770993  959713 system_pods.go:74] duration metric: took 3.962836216s to wait for pod list to return data ...
	I0308 04:19:10.771003  959713 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:10.773653  959713 default_sa.go:45] found service account: "default"
	I0308 04:19:10.773682  959713 default_sa.go:55] duration metric: took 2.66064ms for default service account to be created ...
	I0308 04:19:10.773694  959713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:10.779430  959713 system_pods.go:86] 8 kube-system pods found
	I0308 04:19:10.779453  959713 system_pods.go:89] "coredns-5dd5756b68-xqqds" [497e3ac1-3541-43bc-b138-1a47d7085161] Running
	I0308 04:19:10.779459  959713 system_pods.go:89] "etcd-default-k8s-diff-port-968261" [44a81ed5-1afc-4f82-9c4d-077634885d9d] Running
	I0308 04:19:10.779464  959713 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-968261" [95d5afc2-a72f-4016-ab07-016f6b8f9c63] Running
	I0308 04:19:10.779470  959713 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-968261" [45611485-37ca-45e9-ae2b-5ee90caba66a] Running
	I0308 04:19:10.779474  959713 system_pods.go:89] "kube-proxy-qpxcp" [2ece55d5-ea70-4be7-91c1-b1ac4fbf3def] Running
	I0308 04:19:10.779479  959713 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-968261" [b64fe798-eca6-40f0-8f42-372fdb8a445e] Running
	I0308 04:19:10.779485  959713 system_pods.go:89] "metrics-server-57f55c9bc5-ljb42" [94d8d406-0ea5-4ab7-86ef-e8284c83f810] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:10.779490  959713 system_pods.go:89] "storage-provisioner" [ef2af524-805e-4b03-b57d-52e11b4c4344] Running
	I0308 04:19:10.779499  959713 system_pods.go:126] duration metric: took 5.798633ms to wait for k8s-apps to be running ...
	I0308 04:19:10.779507  959713 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:10.779586  959713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:10.798046  959713 system_svc.go:56] duration metric: took 18.529379ms WaitForService to wait for kubelet
	I0308 04:19:10.798074  959713 kubeadm.go:576] duration metric: took 4m23.173507169s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:10.798130  959713 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:10.801196  959713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:10.801222  959713 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:10.801238  959713 node_conditions.go:105] duration metric: took 3.098276ms to run NodePressure ...
	I0308 04:19:10.801253  959713 start.go:240] waiting for startup goroutines ...
	I0308 04:19:10.801263  959713 start.go:245] waiting for cluster config update ...
	I0308 04:19:10.801318  959713 start.go:254] writing updated cluster config ...
	I0308 04:19:10.801769  959713 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:10.859440  959713 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:10.861533  959713 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-968261" cluster and "default" namespace by default
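(Note: once this profile reports Done, the resulting kubeconfig context can be checked from the host; a quick hedged sanity check, using the cluster name from the message above:)

    kubectl config current-context      # expected: default-k8s-diff-port-968261
    kubectl get nodes -o wide           # node should report Ready
    kubectl -n kube-system get pods     # same pods as the system_pods check above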
	I0308 04:19:09.897122  959419 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:09.897235  959419 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:09.897358  959419 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:09.897503  959419 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:09.897617  959419 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:09.898013  959419 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:09.898518  959419 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:09.899039  959419 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:09.899557  959419 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:09.900187  959419 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:09.900656  959419 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:09.901090  959419 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:09.901174  959419 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.252426  959419 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.578032  959419 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.752533  959419 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:10.985702  959419 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:10.986784  959419 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:10.990677  959419 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:10.258203  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:19:10.258314  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:19:10.258400  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:19:10.258516  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:19:10.258593  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:19:10.258705  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:19:10.258810  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:19:10.258902  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:19:10.259003  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:19:10.259126  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:19:10.259259  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:19:10.259317  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:19:10.259407  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:19:10.402036  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:19:10.651837  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:19:10.744762  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:19:11.013528  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:19:11.039895  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.041229  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.041325  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.218109  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:19:09.532563  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:12.029006  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:10.992549  959419 out.go:204]   - Booting up control plane ...
	I0308 04:19:10.992635  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:10.992764  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:10.993227  959419 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.018730  959419 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:19:11.020605  959419 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:19:11.020750  959419 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:19:11.193962  959419 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:11.219878  959882 out.go:204]   - Booting up control plane ...
	I0308 04:19:11.220026  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:19:11.236570  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:19:11.238303  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:19:11.239599  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:19:11.241861  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:19:14.029853  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:16.035938  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:17.198808  959419 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004579 seconds
	I0308 04:19:17.198946  959419 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:19:17.213163  959419 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:19:17.744322  959419 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:19:17.744588  959419 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-416634 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:19:18.262333  959419 kubeadm.go:309] [bootstrap-token] Using token: fqg0lg.ggyvjkvt5f0c58m0
	I0308 04:19:18.263754  959419 out.go:204]   - Configuring RBAC rules ...
	I0308 04:19:18.263925  959419 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:19:18.270393  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:19:18.278952  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:19:18.285381  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:19:18.289295  959419 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:19:18.293080  959419 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:19:18.307380  959419 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:19:18.587578  959419 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:19:18.677524  959419 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:19:18.677557  959419 kubeadm.go:309] 
	I0308 04:19:18.677675  959419 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:19:18.677701  959419 kubeadm.go:309] 
	I0308 04:19:18.677806  959419 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:19:18.677826  959419 kubeadm.go:309] 
	I0308 04:19:18.677862  959419 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:19:18.677938  959419 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:19:18.678008  959419 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:19:18.678021  959419 kubeadm.go:309] 
	I0308 04:19:18.678082  959419 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:19:18.678089  959419 kubeadm.go:309] 
	I0308 04:19:18.678127  959419 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:19:18.678133  959419 kubeadm.go:309] 
	I0308 04:19:18.678175  959419 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:19:18.678237  959419 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:19:18.678303  959419 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:19:18.678309  959419 kubeadm.go:309] 
	I0308 04:19:18.678376  959419 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:19:18.678441  959419 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:19:18.678447  959419 kubeadm.go:309] 
	I0308 04:19:18.678514  959419 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678637  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:19:18.678660  959419 kubeadm.go:309] 	--control-plane 
	I0308 04:19:18.678665  959419 kubeadm.go:309] 
	I0308 04:19:18.678763  959419 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:19:18.678774  959419 kubeadm.go:309] 
	I0308 04:19:18.678853  959419 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fqg0lg.ggyvjkvt5f0c58m0 \
	I0308 04:19:18.678937  959419 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:19:18.683604  959419 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:19:18.683658  959419 cni.go:84] Creating CNI manager for ""
	I0308 04:19:18.683679  959419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:19:18.685495  959419 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:19:18.529492  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:20.530172  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:18.686954  959419 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:19:18.723595  959419 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
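(Note: the 457-byte /etc/cni/net.d/1-k8s.conflist copied here is not reproduced in the log; purely as an illustration of the general shape of a bridge CNI config of this kind, not the actual file or its subnet:)

    # Illustrative only -- contents and subnet are assumptions, not the file minikube writes.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF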
	I0308 04:19:18.770910  959419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:19:18.770999  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:18.771040  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-416634 minikube.k8s.io/updated_at=2024_03_08T04_19_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=embed-certs-416634 minikube.k8s.io/primary=true
	I0308 04:19:18.882992  959419 ops.go:34] apiserver oom_adj: -16
	I0308 04:19:19.055036  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:19.555797  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.056061  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:20.555798  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.055645  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:21.555937  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.056038  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.555172  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:22.530650  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:25.029105  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:27.035634  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:23.055514  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:23.555556  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.055689  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:24.555936  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.056059  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:25.555860  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.055733  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:26.555685  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.055131  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:27.555731  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.055812  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:28.555751  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.055294  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:29.555822  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.056034  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.555846  959419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:19:30.654566  959419 kubeadm.go:1106] duration metric: took 11.883640463s to wait for elevateKubeSystemPrivileges
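(Note: the repeated `kubectl get sa default` calls above are a readiness poll for the default service account after kubeadm init; a minimal hand-rolled equivalent of that wait, using the same binary and kubeconfig paths shown in the log:)

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default service account exists
    done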
	W0308 04:19:30.654615  959419 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:19:30.654626  959419 kubeadm.go:393] duration metric: took 5m14.030436758s to StartCluster
	I0308 04:19:30.654648  959419 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.654754  959419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:19:30.656685  959419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:19:30.657017  959419 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:19:30.658711  959419 out.go:177] * Verifying Kubernetes components...
	I0308 04:19:30.657165  959419 config.go:182] Loaded profile config "embed-certs-416634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:19:30.657115  959419 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:19:30.660071  959419 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-416634"
	I0308 04:19:30.660097  959419 addons.go:69] Setting default-storageclass=true in profile "embed-certs-416634"
	I0308 04:19:30.660110  959419 addons.go:69] Setting metrics-server=true in profile "embed-certs-416634"
	I0308 04:19:30.660118  959419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:19:30.660127  959419 addons.go:234] Setting addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:30.660136  959419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-416634"
	W0308 04:19:30.660138  959419 addons.go:243] addon metrics-server should already be in state true
	I0308 04:19:30.660101  959419 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-416634"
	W0308 04:19:30.660215  959419 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:19:30.660242  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660200  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660662  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660647  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.660682  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660684  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.660695  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0308 04:19:30.678106  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0308 04:19:30.678888  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.678898  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.679629  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.679657  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680033  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.680092  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0308 04:19:30.680541  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.680562  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.680570  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.680785  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.680814  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.680981  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.681049  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.681072  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.681198  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.681457  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.682105  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.682132  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.685007  959419 addons.go:234] Setting addon default-storageclass=true in "embed-certs-416634"
	W0308 04:19:30.685028  959419 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:19:30.685053  959419 host.go:66] Checking if "embed-certs-416634" exists ...
	I0308 04:19:30.685413  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.685440  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.698369  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0308 04:19:30.698851  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.699312  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.699334  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.699514  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0308 04:19:30.699658  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.699870  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.700095  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.700483  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.700499  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.701052  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.701477  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.701706  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.704251  959419 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:19:30.702864  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.705857  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:19:30.705878  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:19:30.705901  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.707563  959419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:19:29.530298  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:31.531359  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:30.708827  959419 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:30.708845  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:19:30.708862  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.709350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710143  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.710172  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.710282  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0308 04:19:30.710337  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.710527  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.710709  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.710930  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.711085  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.711740  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.711756  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.711964  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712107  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.712326  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.712350  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.712545  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.712678  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.712814  959419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:19:30.712847  959419 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:19:30.713048  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.713220  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.728102  959419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0308 04:19:30.728509  959419 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:19:30.729215  959419 main.go:141] libmachine: Using API Version  1
	I0308 04:19:30.729240  959419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:19:30.729558  959419 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:19:30.729720  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetState
	I0308 04:19:30.730994  959419 main.go:141] libmachine: (embed-certs-416634) Calling .DriverName
	I0308 04:19:30.731285  959419 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:30.731303  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:19:30.731321  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHHostname
	I0308 04:19:30.733957  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734305  959419 main.go:141] libmachine: (embed-certs-416634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:68:e3", ip: ""} in network mk-embed-certs-416634: {Iface:virbr3 ExpiryTime:2024-03-08 05:05:20 +0000 UTC Type:0 Mac:52:54:00:5a:68:e3 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:embed-certs-416634 Clientid:01:52:54:00:5a:68:e3}
	I0308 04:19:30.734398  959419 main.go:141] libmachine: (embed-certs-416634) DBG | domain embed-certs-416634 has defined IP address 192.168.50.137 and MAC address 52:54:00:5a:68:e3 in network mk-embed-certs-416634
	I0308 04:19:30.734561  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHPort
	I0308 04:19:30.734737  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHKeyPath
	I0308 04:19:30.734886  959419 main.go:141] libmachine: (embed-certs-416634) Calling .GetSSHUsername
	I0308 04:19:30.735037  959419 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/embed-certs-416634/id_rsa Username:docker}
	I0308 04:19:30.880938  959419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:19:30.916120  959419 node_ready.go:35] waiting up to 6m0s for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928773  959419 node_ready.go:49] node "embed-certs-416634" has status "Ready":"True"
	I0308 04:19:30.928800  959419 node_ready.go:38] duration metric: took 12.639223ms for node "embed-certs-416634" to be "Ready" ...
	I0308 04:19:30.928809  959419 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:30.935032  959419 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962007  959419 pod_ready.go:92] pod "etcd-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:30.962030  959419 pod_ready.go:81] duration metric: took 26.9702ms for pod "etcd-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.962040  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:30.978720  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:19:31.067889  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:19:31.067923  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:19:31.081722  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:19:31.099175  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:19:31.099205  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:19:31.184411  959419 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.184439  959419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:19:31.255402  959419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:19:31.980910  959419 pod_ready.go:92] pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.980940  959419 pod_ready.go:81] duration metric: took 1.018893136s for pod "kube-apiserver-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.980951  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991889  959419 pod_ready.go:92] pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:31.991914  959419 pod_ready.go:81] duration metric: took 10.956999ms for pod "kube-controller-manager-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:31.991923  959419 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009167  959419 pod_ready.go:92] pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace has status "Ready":"True"
	I0308 04:19:32.009205  959419 pod_ready.go:81] duration metric: took 17.273294ms for pod "kube-scheduler-embed-certs-416634" in "kube-system" namespace to be "Ready" ...
	I0308 04:19:32.009217  959419 pod_ready.go:38] duration metric: took 1.08039715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:32.009238  959419 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:19:32.009327  959419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:19:32.230522  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.251754082s)
	I0308 04:19:32.230594  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.230609  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.230918  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.230978  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.230988  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.230998  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.231010  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.231297  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.231341  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237254  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.237289  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.237557  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.237577  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.237588  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.492739  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410961087s)
	I0308 04:19:32.492795  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.492804  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493183  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493214  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493204  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.493284  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.493303  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.493539  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.493580  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.493580  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.771920  959419 api_server.go:72] duration metric: took 2.114855667s to wait for apiserver process to appear ...
	I0308 04:19:32.771950  959419 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:19:32.771977  959419 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0308 04:19:32.775261  959419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.519808618s)
	I0308 04:19:32.775324  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775342  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.775647  959419 main.go:141] libmachine: (embed-certs-416634) DBG | Closing plugin on server side
	I0308 04:19:32.775712  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.775762  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.775786  959419 main.go:141] libmachine: Making call to close driver server
	I0308 04:19:32.775805  959419 main.go:141] libmachine: (embed-certs-416634) Calling .Close
	I0308 04:19:32.776142  959419 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:19:32.776157  959419 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:19:32.776168  959419 addons.go:470] Verifying addon metrics-server=true in "embed-certs-416634"
	I0308 04:19:32.777770  959419 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0308 04:19:32.778948  959419 addons.go:505] duration metric: took 2.121835726s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0308 04:19:32.786204  959419 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0308 04:19:32.787455  959419 api_server.go:141] control plane version: v1.28.4
	I0308 04:19:32.787476  959419 api_server.go:131] duration metric: took 15.519473ms to wait for apiserver health ...
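
The lines above show the apiserver health gate: the test polls https://192.168.50.137:8443/healthz until it returns 200 before moving on to pod checks. Below is a minimal Go sketch of that polling pattern, not minikube's actual api_server.go; the endpoint URL is copied from the log, while the 5s request timeout, 500ms poll interval, and the insecure TLS client (the test cluster uses self-signed certs) are assumptions for illustration.

// healthzwait.go: poll an apiserver /healthz endpoint until it answers 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification: the test cluster's serving cert is self-signed.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.137:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
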
	I0308 04:19:32.787484  959419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:19:32.793853  959419 system_pods.go:59] 9 kube-system pods found
	I0308 04:19:32.793882  959419 system_pods.go:61] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793892  959419 system_pods.go:61] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.793900  959419 system_pods.go:61] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.793907  959419 system_pods.go:61] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.793914  959419 system_pods.go:61] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.793927  959419 system_pods.go:61] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.793940  959419 system_pods.go:61] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.793950  959419 system_pods.go:61] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.793958  959419 system_pods.go:61] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.793972  959419 system_pods.go:74] duration metric: took 6.479472ms to wait for pod list to return data ...
	I0308 04:19:32.793984  959419 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:19:32.799175  959419 default_sa.go:45] found service account: "default"
	I0308 04:19:32.799199  959419 default_sa.go:55] duration metric: took 5.203464ms for default service account to be created ...
	I0308 04:19:32.799209  959419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:19:32.829367  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:32.829398  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829406  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:32.829412  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:32.829417  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:32.829422  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:32.829430  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:32.829434  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:32.829441  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:32.829447  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:32.829466  959419 retry.go:31] will retry after 306.170242ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.150871  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.150916  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150927  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.150934  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.150940  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.150945  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.150950  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.150954  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.150961  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.150992  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.151013  959419 retry.go:31] will retry after 239.854627ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.418093  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.418129  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418137  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.418145  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.418153  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.418166  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.418181  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.418189  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.418197  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.418203  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.418220  959419 retry.go:31] will retry after 444.153887ms: missing components: kube-dns, kube-proxy
	I0308 04:19:33.871055  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:33.871098  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871111  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 04:19:33.871120  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:33.871128  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:33.871135  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:33.871143  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 04:19:33.871153  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:33.871166  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:33.871180  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0308 04:19:33.871202  959419 retry.go:31] will retry after 470.863205ms: missing components: kube-dns, kube-proxy
	I0308 04:19:34.348946  959419 system_pods.go:86] 9 kube-system pods found
	I0308 04:19:34.348974  959419 system_pods.go:89] "coredns-5dd5756b68-h7p5l" [72be5a70-ece6-4511-bef6-20fe746db41f] Running
	I0308 04:19:34.348980  959419 system_pods.go:89] "coredns-5dd5756b68-t8z94" [6f3d1519-9094-478a-80c5-a9fd11214336] Running
	I0308 04:19:34.348986  959419 system_pods.go:89] "etcd-embed-certs-416634" [5ba8f76c-a2aa-4976-a14c-73ba40778c13] Running
	I0308 04:19:34.348990  959419 system_pods.go:89] "kube-apiserver-embed-certs-416634" [31abe363-3733-4537-99df-3adba5593c63] Running
	I0308 04:19:34.348995  959419 system_pods.go:89] "kube-controller-manager-embed-certs-416634" [61c7fc6d-8e31-45c6-9bac-7d08b9b7bd07] Running
	I0308 04:19:34.348999  959419 system_pods.go:89] "kube-proxy-vc6p9" [8b6e5755-2084-40ef-a128-1f4e04bf1ea6] Running
	I0308 04:19:34.349002  959419 system_pods.go:89] "kube-scheduler-embed-certs-416634" [20816b94-212d-4bc4-a765-dc69466ffe43] Running
	I0308 04:19:34.349008  959419 system_pods.go:89] "metrics-server-57f55c9bc5-kh9vr" [eb205c10-4b89-499f-8cda-adae031e374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:19:34.349016  959419 system_pods.go:89] "storage-provisioner" [8b824332-34d7-477f-9db5-62d7fca45586] Running
	I0308 04:19:34.349025  959419 system_pods.go:126] duration metric: took 1.549809461s to wait for k8s-apps to be running ...
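
The retry.go lines above repeat the same check with short backoffs until no required component is missing (here kube-dns and kube-proxy). The sketch below reproduces that "list, diff against required, retry" loop in isolation; the component names come from the log, but the attempt count, sleep interval, and the hard-coded stand-in for a real pod listing are assumptions, since the real loop queries the API server each time.

// retrywait.go: retry until every required kube-system component is reported running.
package main

import (
	"fmt"
	"time"
)

// missingComponents returns the names in required that are not reported running.
func missingComponents(required []string, running map[string]bool) []string {
	var missing []string
	for _, name := range required {
		if !running[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	required := []string{"kube-dns", "kube-proxy"}
	// Stand-in for a real pod listing; in the log this state comes from the API server.
	observed := map[string]bool{"kube-dns": false, "kube-proxy": false}

	for attempt := 1; attempt <= 5; attempt++ {
		missing := missingComponents(required, observed)
		if len(missing) == 0 {
			fmt.Println("all required components running")
			return
		}
		fmt.Printf("attempt %d: will retry, missing components: %v\n", attempt, missing)
		time.Sleep(300 * time.Millisecond)
		// Simulate the pods becoming Ready on the next poll.
		observed["kube-dns"], observed["kube-proxy"] = true, true
	}
}
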
	I0308 04:19:34.349035  959419 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:19:34.349085  959419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:19:34.365870  959419 system_svc.go:56] duration metric: took 16.823853ms WaitForService to wait for kubelet
	I0308 04:19:34.365902  959419 kubeadm.go:576] duration metric: took 3.708843461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:19:34.365928  959419 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:19:34.369109  959419 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:19:34.369133  959419 node_conditions.go:123] node cpu capacity is 2
	I0308 04:19:34.369147  959419 node_conditions.go:105] duration metric: took 3.212316ms to run NodePressure ...
	I0308 04:19:34.369160  959419 start.go:240] waiting for startup goroutines ...
	I0308 04:19:34.369170  959419 start.go:245] waiting for cluster config update ...
	I0308 04:19:34.369184  959419 start.go:254] writing updated cluster config ...
	I0308 04:19:34.369515  959419 ssh_runner.go:195] Run: rm -f paused
	I0308 04:19:34.421356  959419 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 04:19:34.423382  959419 out.go:177] * Done! kubectl is now configured to use "embed-certs-416634" cluster and "default" namespace by default
	I0308 04:19:34.032230  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:36.530769  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:39.031829  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:41.529593  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:43.530797  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:46.031240  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:48.531575  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.030379  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:51.242711  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:19:51.243774  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:51.244023  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:19:53.530474  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:55.530743  959302 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace has status "Ready":"False"
	I0308 04:19:57.023950  959302 pod_ready.go:81] duration metric: took 4m0.001016312s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" ...
	E0308 04:19:57.023982  959302 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-6nb8p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0308 04:19:57.023999  959302 pod_ready.go:38] duration metric: took 4m14.553044455s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:19:57.024028  959302 kubeadm.go:591] duration metric: took 4m22.162760035s to restartPrimaryControlPlane
	W0308 04:19:57.024091  959302 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0308 04:19:57.024121  959302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:19:56.244599  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:19:56.244909  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:06.245088  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:06.245308  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:26.246278  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:20:26.246520  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:20:29.294005  959302 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.269850368s)
	I0308 04:20:29.294088  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:29.314795  959302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 04:20:29.328462  959302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:20:29.339712  959302 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:20:29.339736  959302 kubeadm.go:156] found existing configuration files:
	
	I0308 04:20:29.339787  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:20:29.351684  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:20:29.351749  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:20:29.364351  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:20:29.376474  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:20:29.376537  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:20:29.389156  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.401283  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:20:29.401336  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:20:29.412425  959302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:20:29.422734  959302 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:20:29.422793  959302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
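
After the kubeadm reset, the config check above walks the four kubeconfig files under /etc/kubernetes and drops any that do not reference https://control-plane.minikube.internal:8443 (here all four are simply absent, so each grep fails and the rm is a no-op). A minimal Go sketch of that cleanup decision follows; the file paths and endpoint string are taken from the log, but this version only prints what it would remove rather than deleting anything, whereas the real flow runs sudo rm -f over SSH.

// staleconfig.go: keep a kubeconfig only if it references the expected control-plane endpoint (dry run).
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, matching the "No such file or directory" case in the log.
			fmt.Printf("%s: not present, skipping\n", f)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: does not reference %s, would remove\n", f, endpoint)
		}
	}
}
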
	I0308 04:20:29.433399  959302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:20:29.494025  959302 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0308 04:20:29.494143  959302 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:20:29.650051  959302 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:20:29.650223  959302 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:20:29.650395  959302 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:20:29.871576  959302 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:20:29.874416  959302 out.go:204]   - Generating certificates and keys ...
	I0308 04:20:29.874527  959302 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:20:29.874619  959302 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:20:29.874739  959302 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:20:29.875257  959302 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:20:29.875385  959302 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:20:29.875473  959302 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:20:29.875573  959302 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:20:29.875671  959302 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:20:29.875771  959302 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:20:29.875870  959302 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:20:29.875919  959302 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:20:29.876003  959302 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:20:29.958111  959302 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:20:30.196023  959302 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0308 04:20:30.292114  959302 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:20:30.402480  959302 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:20:30.616570  959302 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:20:30.617128  959302 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:20:30.620115  959302 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:20:30.622165  959302 out.go:204]   - Booting up control plane ...
	I0308 04:20:30.622294  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:20:30.623030  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:20:30.623947  959302 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:20:30.642490  959302 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:20:30.643287  959302 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:20:30.643406  959302 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:20:30.777595  959302 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:20:36.780669  959302 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002554 seconds
	I0308 04:20:36.794539  959302 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 04:20:36.821558  959302 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 04:20:37.357533  959302 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 04:20:37.357784  959302 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-477676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 04:20:37.871930  959302 kubeadm.go:309] [bootstrap-token] Using token: e0wj6q.ce6728hjmxrz2x54
	I0308 04:20:37.873443  959302 out.go:204]   - Configuring RBAC rules ...
	I0308 04:20:37.873591  959302 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 04:20:37.878966  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 04:20:37.892267  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 04:20:37.896043  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 04:20:37.899537  959302 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 04:20:37.902971  959302 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 04:20:37.923047  959302 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 04:20:38.178400  959302 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 04:20:38.288564  959302 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 04:20:38.289567  959302 kubeadm.go:309] 
	I0308 04:20:38.289658  959302 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 04:20:38.289668  959302 kubeadm.go:309] 
	I0308 04:20:38.289755  959302 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 04:20:38.289764  959302 kubeadm.go:309] 
	I0308 04:20:38.289816  959302 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 04:20:38.289879  959302 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 04:20:38.289943  959302 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 04:20:38.289952  959302 kubeadm.go:309] 
	I0308 04:20:38.290014  959302 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 04:20:38.290022  959302 kubeadm.go:309] 
	I0308 04:20:38.290090  959302 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 04:20:38.290104  959302 kubeadm.go:309] 
	I0308 04:20:38.290169  959302 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 04:20:38.290294  959302 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 04:20:38.290468  959302 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 04:20:38.290496  959302 kubeadm.go:309] 
	I0308 04:20:38.290566  959302 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 04:20:38.290645  959302 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 04:20:38.290655  959302 kubeadm.go:309] 
	I0308 04:20:38.290761  959302 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.290897  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 \
	I0308 04:20:38.290930  959302 kubeadm.go:309] 	--control-plane 
	I0308 04:20:38.290942  959302 kubeadm.go:309] 
	I0308 04:20:38.291039  959302 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 04:20:38.291060  959302 kubeadm.go:309] 
	I0308 04:20:38.291153  959302 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e0wj6q.ce6728hjmxrz2x54 \
	I0308 04:20:38.291282  959302 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:93ce33634fcd8abc3e976c40c3dd18357ceaa5006246bbf3e1d1285da2231046 
	I0308 04:20:38.294676  959302 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:20:38.294734  959302 cni.go:84] Creating CNI manager for ""
	I0308 04:20:38.294754  959302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 04:20:38.296466  959302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 04:20:38.297745  959302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 04:20:38.334917  959302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
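
The two lines above create /etc/cni/net.d and copy a 457-byte bridge conflist onto the node; the log does not show the file's contents, so the sketch below only emits a generic bridge CNI configuration of the same shape (bridge plugin with host-local IPAM plus portmap). The plugin fields and the 10.244.0.0/16 pod CIDR are illustrative assumptions, not the exact 1-k8s.conflist minikube writes.

// bridgecni.go: print a minimal bridge CNI conflist of the kind copied above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // illustrative pod CIDR, not taken from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
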
	I0308 04:20:38.418095  959302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 04:20:38.418187  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:38.418217  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-477676 minikube.k8s.io/updated_at=2024_03_08T04_20_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=455e433ba7e8d9ab4b9063e4f53a142b55799a5b minikube.k8s.io/name=no-preload-477676 minikube.k8s.io/primary=true
	I0308 04:20:38.660723  959302 ops.go:34] apiserver oom_adj: -16
	I0308 04:20:38.660872  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.161425  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:39.661915  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.161095  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:40.661254  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.161862  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:41.661769  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.161879  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:42.661927  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.161913  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:43.661395  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.161307  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:44.661945  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.161518  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:45.661331  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.161714  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:46.661390  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.161464  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:47.661525  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.160966  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:48.661918  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.161334  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:49.661669  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.161739  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:50.661364  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.161161  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.661690  959302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 04:20:51.764084  959302 kubeadm.go:1106] duration metric: took 13.345963984s to wait for elevateKubeSystemPrivileges
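
The block of repeated "kubectl get sa default" runs above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, the command is retried roughly every half second until the default service account exists, which took about 13.3s here. Below is a compact Go sketch of that polling, shelling out the same way ssh_runner does conceptually; the kubectl path and kubeconfig flag are copied from the log, while the 2-minute deadline and 500ms interval are assumptions.

// waitdefaultsa.go: poll `kubectl get sa default` until the default service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl"
	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// A zero exit status means the service account is visible to the API server.
		if err := exec.Command(kubectl, args...).Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
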
	W0308 04:20:51.764134  959302 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 04:20:51.764156  959302 kubeadm.go:393] duration metric: took 5m16.958788194s to StartCluster
	I0308 04:20:51.764205  959302 settings.go:142] acquiring lock: {Name:mkcbd3624d6d8468b0b61f15f70eb3471cb7bc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.764336  959302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:20:51.766388  959302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18333-911675/kubeconfig: {Name:mkecdc5840869d9ffd319e1cb8a7868d63e45388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 04:20:51.766667  959302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0308 04:20:51.768342  959302 out.go:177] * Verifying Kubernetes components...
	I0308 04:20:51.766716  959302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 04:20:51.766897  959302 config.go:182] Loaded profile config "no-preload-477676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:20:51.768412  959302 addons.go:69] Setting storage-provisioner=true in profile "no-preload-477676"
	I0308 04:20:51.769593  959302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 04:20:51.769616  959302 addons.go:234] Setting addon storage-provisioner=true in "no-preload-477676"
	W0308 04:20:51.769629  959302 addons.go:243] addon storage-provisioner should already be in state true
	I0308 04:20:51.769664  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.768418  959302 addons.go:69] Setting default-storageclass=true in profile "no-preload-477676"
	I0308 04:20:51.769732  959302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-477676"
	I0308 04:20:51.768422  959302 addons.go:69] Setting metrics-server=true in profile "no-preload-477676"
	I0308 04:20:51.769798  959302 addons.go:234] Setting addon metrics-server=true in "no-preload-477676"
	W0308 04:20:51.769811  959302 addons.go:243] addon metrics-server should already be in state true
	I0308 04:20:51.769836  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.770113  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770142  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770153  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.770173  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.770181  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.785859  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0308 04:20:51.786074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0308 04:20:51.786424  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.786470  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.787023  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787040  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787196  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.787224  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.787422  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.787632  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.788018  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788051  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.788160  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.788195  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.789324  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0308 04:20:51.789811  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.790319  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.790346  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.790801  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.791020  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.795411  959302 addons.go:234] Setting addon default-storageclass=true in "no-preload-477676"
	W0308 04:20:51.795434  959302 addons.go:243] addon default-storageclass should already be in state true
	I0308 04:20:51.795808  959302 host.go:66] Checking if "no-preload-477676" exists ...
	I0308 04:20:51.796198  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.796229  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.806074  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0308 04:20:51.806518  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.807948  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.807972  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.808228  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0308 04:20:51.808406  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.808631  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.808803  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.809124  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.809148  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.809472  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.809654  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.810970  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.812952  959302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 04:20:51.811652  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.814339  959302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:51.814364  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 04:20:51.814385  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.815552  959302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0308 04:20:51.816733  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0308 04:20:51.816750  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0308 04:20:51.816769  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.817737  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818394  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.818441  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.818589  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.818788  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.819269  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.819461  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.820098  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820326  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.820353  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.820383  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0308 04:20:51.820551  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.820745  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.820838  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.820992  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.821143  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:51.821518  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.821544  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.821927  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.822486  959302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 04:20:51.822532  959302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 04:20:51.837862  959302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0308 04:20:51.838321  959302 main.go:141] libmachine: () Calling .GetVersion
	I0308 04:20:51.838868  959302 main.go:141] libmachine: Using API Version  1
	I0308 04:20:51.838899  959302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 04:20:51.839274  959302 main.go:141] libmachine: () Calling .GetMachineName
	I0308 04:20:51.839488  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetState
	I0308 04:20:51.841382  959302 main.go:141] libmachine: (no-preload-477676) Calling .DriverName
	I0308 04:20:51.841651  959302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:51.841671  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 04:20:51.841689  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHHostname
	I0308 04:20:51.844535  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845056  959302 main.go:141] libmachine: (no-preload-477676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:6f:03", ip: ""} in network mk-no-preload-477676: {Iface:virbr2 ExpiryTime:2024-03-08 05:04:54 +0000 UTC Type:0 Mac:52:54:00:3e:6f:03 Iaid: IPaddr:192.168.72.214 Prefix:24 Hostname:no-preload-477676 Clientid:01:52:54:00:3e:6f:03}
	I0308 04:20:51.845395  959302 main.go:141] libmachine: (no-preload-477676) DBG | domain no-preload-477676 has defined IP address 192.168.72.214 and MAC address 52:54:00:3e:6f:03 in network mk-no-preload-477676
	I0308 04:20:51.845398  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHPort
	I0308 04:20:51.845577  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHKeyPath
	I0308 04:20:51.845722  959302 main.go:141] libmachine: (no-preload-477676) Calling .GetSSHUsername
	I0308 04:20:51.845886  959302 sshutil.go:53] new ssh client: &{IP:192.168.72.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/no-preload-477676/id_rsa Username:docker}
	I0308 04:20:52.005863  959302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 04:20:52.035228  959302 node_ready.go:35] waiting up to 6m0s for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054884  959302 node_ready.go:49] node "no-preload-477676" has status "Ready":"True"
	I0308 04:20:52.054910  959302 node_ready.go:38] duration metric: took 19.648834ms for node "no-preload-477676" to be "Ready" ...
	I0308 04:20:52.054920  959302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:52.063975  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:52.138383  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 04:20:52.167981  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0308 04:20:52.168012  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0308 04:20:52.185473  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 04:20:52.239574  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0308 04:20:52.239611  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0308 04:20:52.284054  959302 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:52.284093  959302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0308 04:20:52.349526  959302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0308 04:20:53.362661  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.177145908s)
	I0308 04:20:53.362739  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.362751  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.362962  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224538741s)
	I0308 04:20:53.363030  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363045  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363077  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363094  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363103  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363110  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363383  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363402  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363437  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363445  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.363463  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363446  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.363474  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.363483  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.363696  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.363710  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400512  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.400550  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.400881  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.400905  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.400914  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.675739  959302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326154891s)
	I0308 04:20:53.675804  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.675821  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676167  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.676216  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676231  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676244  959302 main.go:141] libmachine: Making call to close driver server
	I0308 04:20:53.676254  959302 main.go:141] libmachine: (no-preload-477676) Calling .Close
	I0308 04:20:53.676534  959302 main.go:141] libmachine: Successfully made call to close driver server
	I0308 04:20:53.676555  959302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0308 04:20:53.676567  959302 addons.go:470] Verifying addon metrics-server=true in "no-preload-477676"
	I0308 04:20:53.676534  959302 main.go:141] libmachine: (no-preload-477676) DBG | Closing plugin on server side
	I0308 04:20:53.678300  959302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0308 04:20:53.679648  959302 addons.go:505] duration metric: took 1.912930983s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
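The addon enablement logged above follows a simple pattern: each manifest is copied to /etc/kubernetes/addons on the node and then applied with the node's pinned kubectl binary under the cluster kubeconfig. Below is a minimal Go sketch of that apply step only; it is an illustration, not minikube's addons.go code, it runs locally instead of over the ssh_runner shown in the log, and the binary path and manifest list are copied from the log lines above as assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests mirrors the shape of the command logged above:
// the pinned kubectl applies each addon manifest with KUBECONFIG pointing
// at the cluster kubeconfig. Paths are the ones from this particular run.
func applyAddonManifests(manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	fmt.Println("apply error:", err)
}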
	I0308 04:20:54.077863  959302 pod_ready.go:92] pod "coredns-76f75df574-hc8hb" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.077894  959302 pod_ready.go:81] duration metric: took 2.013885079s for pod "coredns-76f75df574-hc8hb" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.077907  959302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088313  959302 pod_ready.go:92] pod "coredns-76f75df574-kj6pn" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.088336  959302 pod_ready.go:81] duration metric: took 10.420755ms for pod "coredns-76f75df574-kj6pn" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.088349  959302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093953  959302 pod_ready.go:92] pod "etcd-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.093978  959302 pod_ready.go:81] duration metric: took 5.618114ms for pod "etcd-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.093989  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098774  959302 pod_ready.go:92] pod "kube-apiserver-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.098801  959302 pod_ready.go:81] duration metric: took 4.803911ms for pod "kube-apiserver-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.098814  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104207  959302 pod_ready.go:92] pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.104232  959302 pod_ready.go:81] duration metric: took 5.404378ms for pod "kube-controller-manager-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.104243  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469479  959302 pod_ready.go:92] pod "kube-proxy-hr99w" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.469504  959302 pod_ready.go:81] duration metric: took 365.252828ms for pod "kube-proxy-hr99w" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.469515  959302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869789  959302 pod_ready.go:92] pod "kube-scheduler-no-preload-477676" in "kube-system" namespace has status "Ready":"True"
	I0308 04:20:54.869815  959302 pod_ready.go:81] duration metric: took 400.294319ms for pod "kube-scheduler-no-preload-477676" in "kube-system" namespace to be "Ready" ...
	I0308 04:20:54.869823  959302 pod_ready.go:38] duration metric: took 2.814892982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 04:20:54.869845  959302 api_server.go:52] waiting for apiserver process to appear ...
	I0308 04:20:54.869912  959302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 04:20:54.887691  959302 api_server.go:72] duration metric: took 3.120974236s to wait for apiserver process to appear ...
	I0308 04:20:54.887718  959302 api_server.go:88] waiting for apiserver healthz status ...
	I0308 04:20:54.887740  959302 api_server.go:253] Checking apiserver healthz at https://192.168.72.214:8443/healthz ...
	I0308 04:20:54.892278  959302 api_server.go:279] https://192.168.72.214:8443/healthz returned 200:
	ok
	I0308 04:20:54.893625  959302 api_server.go:141] control plane version: v1.29.0-rc.2
	I0308 04:20:54.893647  959302 api_server.go:131] duration metric: took 5.922155ms to wait for apiserver health ...
	I0308 04:20:54.893661  959302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 04:20:55.072595  959302 system_pods.go:59] 9 kube-system pods found
	I0308 04:20:55.072628  959302 system_pods.go:61] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.072633  959302 system_pods.go:61] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.072637  959302 system_pods.go:61] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.072640  959302 system_pods.go:61] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.072644  959302 system_pods.go:61] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.072647  959302 system_pods.go:61] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.072649  959302 system_pods.go:61] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.072661  959302 system_pods.go:61] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.072667  959302 system_pods.go:61] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.072678  959302 system_pods.go:74] duration metric: took 179.009824ms to wait for pod list to return data ...
	I0308 04:20:55.072689  959302 default_sa.go:34] waiting for default service account to be created ...
	I0308 04:20:55.268734  959302 default_sa.go:45] found service account: "default"
	I0308 04:20:55.268765  959302 default_sa.go:55] duration metric: took 196.068321ms for default service account to be created ...
	I0308 04:20:55.268778  959302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 04:20:55.472251  959302 system_pods.go:86] 9 kube-system pods found
	I0308 04:20:55.472292  959302 system_pods.go:89] "coredns-76f75df574-hc8hb" [2cfb86dd-0394-453d-92a7-b3c7f500cc5e] Running
	I0308 04:20:55.472301  959302 system_pods.go:89] "coredns-76f75df574-kj6pn" [48ed9c5f-0f19-4fc1-be44-67dc8128f288] Running
	I0308 04:20:55.472308  959302 system_pods.go:89] "etcd-no-preload-477676" [9f162c4c-66e8-4080-af52-7ad95279a936] Running
	I0308 04:20:55.472314  959302 system_pods.go:89] "kube-apiserver-no-preload-477676" [be05b12e-b98c-40d5-a7d2-76ab6592e100] Running
	I0308 04:20:55.472321  959302 system_pods.go:89] "kube-controller-manager-no-preload-477676" [ed2ead43-77b1-4755-8763-960e8c2438a5] Running
	I0308 04:20:55.472330  959302 system_pods.go:89] "kube-proxy-hr99w" [568b12b2-3f01-4846-83fe-9d571ae15863] Running
	I0308 04:20:55.472336  959302 system_pods.go:89] "kube-scheduler-no-preload-477676" [24b3ee1d-a8ce-49b5-b3d0-ddf3c87ded9b] Running
	I0308 04:20:55.472346  959302 system_pods.go:89] "metrics-server-57f55c9bc5-756mf" [3cbcc7ec-83f5-40fa-a95f-e0670eeeb79f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0308 04:20:55.472354  959302 system_pods.go:89] "storage-provisioner" [97f15cad-a6b3-4a16-b8eb-a083fb1f3a9e] Running
	I0308 04:20:55.472366  959302 system_pods.go:126] duration metric: took 203.581049ms to wait for k8s-apps to be running ...
	I0308 04:20:55.472379  959302 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 04:20:55.472438  959302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:20:55.491115  959302 system_svc.go:56] duration metric: took 18.726292ms WaitForService to wait for kubelet
	I0308 04:20:55.491147  959302 kubeadm.go:576] duration metric: took 3.724437919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 04:20:55.491180  959302 node_conditions.go:102] verifying NodePressure condition ...
	I0308 04:20:55.669455  959302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 04:20:55.669489  959302 node_conditions.go:123] node cpu capacity is 2
	I0308 04:20:55.669503  959302 node_conditions.go:105] duration metric: took 178.317276ms to run NodePressure ...
	I0308 04:20:55.669517  959302 start.go:240] waiting for startup goroutines ...
	I0308 04:20:55.669527  959302 start.go:245] waiting for cluster config update ...
	I0308 04:20:55.669543  959302 start.go:254] writing updated cluster config ...
	I0308 04:20:55.669832  959302 ssh_runner.go:195] Run: rm -f paused
	I0308 04:20:55.723845  959302 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0308 04:20:55.726688  959302 out.go:177] * Done! kubectl is now configured to use "no-preload-477676" cluster and "default" namespace by default
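Before the "Done!" line above is printed, the start path probes the apiserver at https://192.168.72.214:8443/healthz (api_server.go:253) and proceeds only once it returns 200. The following Go sketch shows such a probe under stated assumptions: it is not minikube's api_server.go implementation, the URL and timeout are taken from this run for illustration, and TLS verification is skipped only because the sketch has no cluster CA bundle.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly mirroring the healthz wait logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.214:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}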
	I0308 04:21:06.247770  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:06.248098  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:06.248222  959882 kubeadm.go:309] 
	I0308 04:21:06.248309  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:21:06.248810  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:21:06.248823  959882 kubeadm.go:309] 
	I0308 04:21:06.248852  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:21:06.248881  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:21:06.248973  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:21:06.248997  959882 kubeadm.go:309] 
	I0308 04:21:06.249162  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:21:06.249219  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:21:06.249266  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:21:06.249300  959882 kubeadm.go:309] 
	I0308 04:21:06.249464  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:21:06.249558  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:21:06.249572  959882 kubeadm.go:309] 
	I0308 04:21:06.249682  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:21:06.249760  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:21:06.249878  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:21:06.250294  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:21:06.250305  959882 kubeadm.go:309] 
	I0308 04:21:06.252864  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:21:06.252978  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:21:06.253069  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0308 04:21:06.253230  959882 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0308 04:21:06.253297  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0308 04:21:07.066988  959882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 04:21:07.083058  959882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 04:21:07.096295  959882 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 04:21:07.096320  959882 kubeadm.go:156] found existing configuration files:
	
	I0308 04:21:07.096366  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 04:21:07.106314  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 04:21:07.106373  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 04:21:07.116935  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 04:21:07.127214  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 04:21:07.127268  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 04:21:07.136999  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.146795  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 04:21:07.146845  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 04:21:07.156991  959882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 04:21:07.167082  959882 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 04:21:07.167118  959882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 04:21:07.177269  959882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 04:21:07.259406  959882 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0308 04:21:07.259503  959882 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 04:21:07.421596  959882 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 04:21:07.421733  959882 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 04:21:07.421865  959882 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 04:21:07.620164  959882 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 04:21:07.622782  959882 out.go:204]   - Generating certificates and keys ...
	I0308 04:21:07.622873  959882 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 04:21:07.622960  959882 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 04:21:07.623035  959882 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 04:21:07.623123  959882 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0308 04:21:07.623249  959882 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0308 04:21:07.623341  959882 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0308 04:21:07.623464  959882 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0308 04:21:07.623567  959882 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0308 04:21:07.623681  959882 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 04:21:07.624037  959882 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 04:21:07.624101  959882 kubeadm.go:309] [certs] Using the existing "sa" key
	I0308 04:21:07.624190  959882 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 04:21:07.756619  959882 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 04:21:07.925445  959882 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 04:21:08.008874  959882 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 04:21:08.079536  959882 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 04:21:08.101999  959882 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 04:21:08.102142  959882 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 04:21:08.102219  959882 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 04:21:08.250145  959882 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 04:21:08.251696  959882 out.go:204]   - Booting up control plane ...
	I0308 04:21:08.251831  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 04:21:08.259976  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 04:21:08.260921  959882 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 04:21:08.261777  959882 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 04:21:08.275903  959882 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 04:21:48.278198  959882 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0308 04:21:48.278368  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:48.278642  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:21:53.278992  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:21:53.279173  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:03.279415  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:03.279649  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:22:23.280719  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:22:23.280997  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281431  959882 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0308 04:23:03.281715  959882 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0308 04:23:03.281744  959882 kubeadm.go:309] 
	I0308 04:23:03.281783  959882 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0308 04:23:03.281818  959882 kubeadm.go:309] 		timed out waiting for the condition
	I0308 04:23:03.281825  959882 kubeadm.go:309] 
	I0308 04:23:03.281861  959882 kubeadm.go:309] 	This error is likely caused by:
	I0308 04:23:03.281907  959882 kubeadm.go:309] 		- The kubelet is not running
	I0308 04:23:03.282037  959882 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0308 04:23:03.282046  959882 kubeadm.go:309] 
	I0308 04:23:03.282134  959882 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0308 04:23:03.282197  959882 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0308 04:23:03.282258  959882 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0308 04:23:03.282268  959882 kubeadm.go:309] 
	I0308 04:23:03.282413  959882 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0308 04:23:03.282536  959882 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0308 04:23:03.282550  959882 kubeadm.go:309] 
	I0308 04:23:03.282667  959882 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0308 04:23:03.282750  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0308 04:23:03.282829  959882 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0308 04:23:03.282914  959882 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0308 04:23:03.282926  959882 kubeadm.go:309] 
	I0308 04:23:03.283783  959882 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 04:23:03.283890  959882 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0308 04:23:03.283963  959882 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0308 04:23:03.284068  959882 kubeadm.go:393] duration metric: took 7m59.556147133s to StartCluster
	I0308 04:23:03.284169  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0308 04:23:03.284270  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0308 04:23:03.334879  959882 cri.go:89] found id: ""
	I0308 04:23:03.334904  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.334913  959882 logs.go:278] No container was found matching "kube-apiserver"
	I0308 04:23:03.334920  959882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0308 04:23:03.334986  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0308 04:23:03.375055  959882 cri.go:89] found id: ""
	I0308 04:23:03.375083  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.375091  959882 logs.go:278] No container was found matching "etcd"
	I0308 04:23:03.375097  959882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0308 04:23:03.375161  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0308 04:23:03.423046  959882 cri.go:89] found id: ""
	I0308 04:23:03.423075  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.423086  959882 logs.go:278] No container was found matching "coredns"
	I0308 04:23:03.423093  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0308 04:23:03.423173  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0308 04:23:03.464319  959882 cri.go:89] found id: ""
	I0308 04:23:03.464357  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.464369  959882 logs.go:278] No container was found matching "kube-scheduler"
	I0308 04:23:03.464378  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0308 04:23:03.464443  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0308 04:23:03.510080  959882 cri.go:89] found id: ""
	I0308 04:23:03.510107  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.510116  959882 logs.go:278] No container was found matching "kube-proxy"
	I0308 04:23:03.510122  959882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0308 04:23:03.510201  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0308 04:23:03.573252  959882 cri.go:89] found id: ""
	I0308 04:23:03.573291  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.573300  959882 logs.go:278] No container was found matching "kube-controller-manager"
	I0308 04:23:03.573307  959882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0308 04:23:03.573377  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0308 04:23:03.617263  959882 cri.go:89] found id: ""
	I0308 04:23:03.617310  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.617322  959882 logs.go:278] No container was found matching "kindnet"
	I0308 04:23:03.617330  959882 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0308 04:23:03.617398  959882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0308 04:23:03.656516  959882 cri.go:89] found id: ""
	I0308 04:23:03.656550  959882 logs.go:276] 0 containers: []
	W0308 04:23:03.656562  959882 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0308 04:23:03.656577  959882 logs.go:123] Gathering logs for describe nodes ...
	I0308 04:23:03.656595  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0308 04:23:03.750643  959882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0308 04:23:03.750669  959882 logs.go:123] Gathering logs for CRI-O ...
	I0308 04:23:03.750684  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0308 04:23:03.867974  959882 logs.go:123] Gathering logs for container status ...
	I0308 04:23:03.868013  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0308 04:23:03.921648  959882 logs.go:123] Gathering logs for kubelet ...
	I0308 04:23:03.921691  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0308 04:23:03.972610  959882 logs.go:123] Gathering logs for dmesg ...
	I0308 04:23:03.972642  959882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0308 04:23:03.989987  959882 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0308 04:23:03.990038  959882 out.go:239] * 
	W0308 04:23:03.990131  959882 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.990157  959882 out.go:239] * 
	W0308 04:23:03.991166  959882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0308 04:23:03.994434  959882 out.go:177] 
	W0308 04:23:03.995696  959882 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0308 04:23:03.995755  959882 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0308 04:23:03.995782  959882 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0308 04:23:03.997285  959882 out.go:177] 
	
	
	==> CRI-O <==
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.112328663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872445112300581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bf650f1-fd5c-4f52-9c5e-1ef5f2b88373 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.112837303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a09f4a2a-818e-46d5-842a-7e1caea8be15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.112956581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a09f4a2a-818e-46d5-842a-7e1caea8be15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.113030383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a09f4a2a-818e-46d5-842a-7e1caea8be15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.151166514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b73c5ae-77d8-483d-bb6b-45e70094c525 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.151256034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b73c5ae-77d8-483d-bb6b-45e70094c525 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.152347080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26399129-7427-43c0-a54c-2f51ffbd46af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.152770846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872445152748077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26399129-7427-43c0-a54c-2f51ffbd46af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.153449276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c5cfa4c-dbf1-48d7-a5d4-4e8d44c8fea9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.153528497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c5cfa4c-dbf1-48d7-a5d4-4e8d44c8fea9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.153572939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3c5cfa4c-dbf1-48d7-a5d4-4e8d44c8fea9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.189509147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2081bb19-2ad5-4b8b-8f65-03d7426d1947 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.189592698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2081bb19-2ad5-4b8b-8f65-03d7426d1947 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.190537545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53b9c029-79c3-484e-9d15-e40732e7e19d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.191076928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872445191053812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53b9c029-79c3-484e-9d15-e40732e7e19d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.191671167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e897e4e-7ca6-41d3-9f8f-1fcb1d9fdecc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.191743306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e897e4e-7ca6-41d3-9f8f-1fcb1d9fdecc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.191784123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e897e4e-7ca6-41d3-9f8f-1fcb1d9fdecc name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.228598217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b74885bc-f91a-4222-abf6-8e9ede9cff09 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.228703513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b74885bc-f91a-4222-abf6-8e9ede9cff09 name=/runtime.v1.RuntimeService/Version
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.230164131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4de79863-ef49-4e0c-a5ba-cccbaf47185c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.230614386Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709872445230550986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4de79863-ef49-4e0c-a5ba-cccbaf47185c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.231196852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19bbeefd-96ca-4e10-9ae2-334a62c56c9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.231253433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19bbeefd-96ca-4e10-9ae2-334a62c56c9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 08 04:34:05 old-k8s-version-496808 crio[646]: time="2024-03-08 04:34:05.231290697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19bbeefd-96ca-4e10-9ae2-334a62c56c9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar 8 04:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053945] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.875570] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.587428] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.467385] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.950443] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.070135] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073031] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.179936] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.161996] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.305208] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[Mar 8 04:15] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.072099] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.055797] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +9.463903] kauditd_printk_skb: 46 callbacks suppressed
	[Mar 8 04:19] systemd-fstab-generator[5010]: Ignoring "noauto" option for root device
	[Mar 8 04:21] systemd-fstab-generator[5289]: Ignoring "noauto" option for root device
	[  +0.072080] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 04:34:05 up 19 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux old-k8s-version-496808 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b66fc0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b60510, 0x24, 0x0, ...)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: net.(*Dialer).DialContext(0xc0001b75c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b60510, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000919420, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b60510, 0x24, 0x60, 0x7f5019d272f0, 0x118, ...)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: net/http.(*Transport).dial(0xc000754f00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b60510, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: net/http.(*Transport).dialConn(0xc000754f00, 0x4f7fe00, 0xc000120018, 0x0, 0xc000a48960, 0x5, 0xc000b60510, 0x24, 0x0, 0xc000a4e5a0, ...)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: net/http.(*Transport).dialConnFor(0xc000754f00, 0xc0009a56b0)
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]: created by net/http.(*Transport).queueForDial
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6733]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 08 04:34:02 old-k8s-version-496808 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 08 04:34:02 old-k8s-version-496808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 08 04:34:02 old-k8s-version-496808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 134.
	Mar 08 04:34:02 old-k8s-version-496808 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 08 04:34:02 old-k8s-version-496808 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6742]: I0308 04:34:02.997206    6742 server.go:416] Version: v1.20.0
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6742]: I0308 04:34:02.997472    6742 server.go:837] Client rotation is on, will bootstrap in background
	Mar 08 04:34:02 old-k8s-version-496808 kubelet[6742]: I0308 04:34:02.999634    6742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 08 04:34:03 old-k8s-version-496808 kubelet[6742]: W0308 04:34:03.000835    6742 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 08 04:34:03 old-k8s-version-496808 kubelet[6742]: I0308 04:34:03.001065    6742 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 2 (270.578086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-496808" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (115.91s)
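
The kubelet on old-k8s-version-496808 is crash-looping (the systemd restart counter is at 134 in the log above), so the apiserver on localhost:8443 never answers and the framework skips its kubectl checks. A minimal follow-up sketch, using only commands already suggested in the output above; the profile name is the one from this run, and the cgroup-driver re-run is minikube's own suggestion rather than a confirmed fix:

    # Commands kubeadm itself suggests: inspect the kubelet unit and its journal on the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-496808 -- "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-496808 -- "sudo journalctl -xeu kubelet | tail -n 50"

    # List any control-plane containers CRI-O may have started (verbatim from the kubeadm hint)
    out/minikube-linux-amd64 ssh -p old-k8s-version-496808 -- \
      "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # Quick apiserver state check, as the test helper does above
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-496808

    # minikube's suggested retry for cgroup-driver mismatches (not verified to fix this run)
    out/minikube-linux-amd64 start -p old-k8s-version-496808 --extra-config=kubelet.cgroup-driver=systemd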

                                                
                                    

Test pass (249/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 4.62
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.79
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 132.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 145.58
38 TestAddons/parallel/Registry 15.62
40 TestAddons/parallel/InspektorGadget 12.05
41 TestAddons/parallel/MetricsServer 6.81
42 TestAddons/parallel/HelmTiller 12.52
44 TestAddons/parallel/CSI 70.01
45 TestAddons/parallel/Headlamp 14.02
46 TestAddons/parallel/CloudSpanner 5.77
47 TestAddons/parallel/LocalPath 53.61
48 TestAddons/parallel/NvidiaDevicePlugin 5.73
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 98.45
55 TestCertExpiration 283.81
57 TestForceSystemdFlag 59.53
58 TestForceSystemdEnv 69.04
60 TestKVMDriverInstallOrUpdate 1.35
64 TestErrorSpam/setup 44.72
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.62
68 TestErrorSpam/unpause 1.8
69 TestErrorSpam/stop 4.86
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.01
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.03
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
81 TestFunctional/serial/CacheCmd/cache/add_local 1.09
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 33.44
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.66
92 TestFunctional/serial/LogsFileCmd 1.81
93 TestFunctional/serial/InvalidService 4.05
95 TestFunctional/parallel/ConfigCmd 0.46
96 TestFunctional/parallel/DashboardCmd 30.52
97 TestFunctional/parallel/DryRun 0.39
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.03
103 TestFunctional/parallel/ServiceCmdConnect 10.6
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 46.65
107 TestFunctional/parallel/SSHCmd 0.49
108 TestFunctional/parallel/CpCmd 1.36
109 TestFunctional/parallel/MySQL 25.67
110 TestFunctional/parallel/FileSync 0.21
111 TestFunctional/parallel/CertSync 1.67
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
119 TestFunctional/parallel/License 0.15
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.23
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
131 TestFunctional/parallel/ProfileCmd/profile_list 0.4
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
133 TestFunctional/parallel/MountCmd/any-port 5.55
134 TestFunctional/parallel/MountCmd/specific-port 2.15
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
136 TestFunctional/parallel/Version/short 0.18
137 TestFunctional/parallel/Version/components 0.99
138 TestFunctional/parallel/ServiceCmd/List 0.36
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
144 TestFunctional/parallel/ImageCommands/ImageBuild 6.27
145 TestFunctional/parallel/ImageCommands/Setup 1.24
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
147 TestFunctional/parallel/ServiceCmd/Format 0.53
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.11
149 TestFunctional/parallel/ServiceCmd/URL 0.51
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.09
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.05
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMutliControlPlane/serial/StartCluster 225.18
166 TestMutliControlPlane/serial/DeployApp 4.83
167 TestMutliControlPlane/serial/PingHostFromPods 1.44
168 TestMutliControlPlane/serial/AddWorkerNode 44.28
169 TestMutliControlPlane/serial/NodeLabels 0.07
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.58
171 TestMutliControlPlane/serial/CopyFile 13.84
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.52
175 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.43
177 TestMutliControlPlane/serial/DeleteSecondaryNode 17.43
178 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMutliControlPlane/serial/RestartCluster 334.81
181 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.43
182 TestMutliControlPlane/serial/AddSecondaryNode 77.13
183 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
187 TestJSONOutput/start/Command 96.31
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.81
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.67
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.47
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 96.06
219 TestMountStart/serial/StartWithMountFirst 27.83
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 28.99
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.91
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.42
226 TestMountStart/serial/RestartStopped 23.58
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 105.26
231 TestMultiNode/serial/DeployApp2Nodes 3.65
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 39.25
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.65
237 TestMultiNode/serial/StopNode 3.19
238 TestMultiNode/serial/StartAfterStop 27.91
240 TestMultiNode/serial/DeleteNode 2.57
242 TestMultiNode/serial/RestartMultiNode 194.95
243 TestMultiNode/serial/ValidateNameConflict 48.53
250 TestScheduledStopUnix 116.26
254 TestRunningBinaryUpgrade 192.63
258 TestStoppedBinaryUpgrade/Setup 0.51
259 TestStoppedBinaryUpgrade/Upgrade 202.6
268 TestPause/serial/Start 98.37
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
271 TestNoKubernetes/serial/StartWithK8s 44.74
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
274 TestNoKubernetes/serial/StartWithStopK8s 17.34
275 TestNoKubernetes/serial/Start 27.64
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
277 TestNoKubernetes/serial/ProfileList 0.85
278 TestNoKubernetes/serial/Stop 1.42
279 TestNoKubernetes/serial/StartNoArgs 63.95
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
288 TestNetworkPlugins/group/false 3.86
295 TestStartStop/group/no-preload/serial/FirstStart 143.4
297 TestStartStop/group/embed-certs/serial/FirstStart 127.25
298 TestStartStop/group/no-preload/serial/DeployApp 9.34
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.61
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
303 TestStartStop/group/embed-certs/serial/DeployApp 7.31
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
313 TestStartStop/group/no-preload/serial/SecondStart 703.93
314 TestStartStop/group/embed-certs/serial/SecondStart 611.88
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 537.92
317 TestStartStop/group/old-k8s-version/serial/Stop 3.3
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/newest-cni/serial/FirstStart 63.26
330 TestNetworkPlugins/group/auto/Start 119.47
331 TestNetworkPlugins/group/kindnet/Start 96.75
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.03
334 TestStartStop/group/newest-cni/serial/Stop 8.42
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
336 TestStartStop/group/newest-cni/serial/SecondStart 54.17
337 TestNetworkPlugins/group/auto/KubeletFlags 0.25
338 TestNetworkPlugins/group/auto/NetCatPod 10.23
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/newest-cni/serial/Pause 2.63
344 TestNetworkPlugins/group/auto/DNS 0.18
345 TestNetworkPlugins/group/auto/Localhost 0.18
346 TestNetworkPlugins/group/calico/Start 91.19
347 TestNetworkPlugins/group/auto/HairPin 0.17
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
349 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
350 TestNetworkPlugins/group/kindnet/DNS 0.22
351 TestNetworkPlugins/group/kindnet/Localhost 0.17
352 TestNetworkPlugins/group/kindnet/HairPin 0.2
353 TestNetworkPlugins/group/custom-flannel/Start 93.17
354 TestNetworkPlugins/group/enable-default-cni/Start 145.16
355 TestNetworkPlugins/group/flannel/Start 140.35
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.23
358 TestNetworkPlugins/group/calico/NetCatPod 12.28
359 TestNetworkPlugins/group/calico/DNS 0.22
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
361 TestNetworkPlugins/group/calico/Localhost 0.2
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
363 TestNetworkPlugins/group/calico/HairPin 0.19
364 TestNetworkPlugins/group/custom-flannel/DNS 0.25
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
367 TestNetworkPlugins/group/bridge/Start 98.71
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.58
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
372 TestNetworkPlugins/group/flannel/NetCatPod 10.26
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
376 TestNetworkPlugins/group/flannel/DNS 0.24
377 TestNetworkPlugins/group/flannel/Localhost 0.18
378 TestNetworkPlugins/group/flannel/HairPin 0.19
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
380 TestNetworkPlugins/group/bridge/NetCatPod 11.22
381 TestNetworkPlugins/group/bridge/DNS 0.16
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (8.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029776 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029776 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.12626335s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.13s)
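
The json-events variants exercise -o=json, which makes minikube start emit its progress events as JSON instead of the usual human-readable output. A small sketch of consuming that stream, assuming only that each emitted line is a self-contained JSON object; the profile name and the use of jq are illustrative and not part of the test:

    # Pretty-print each JSON event from a download-only start (hypothetical local run)
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
        --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq .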

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029776
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029776: exit status 85 (71.787902ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC |          |
	|         | -p download-only-029776        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:55:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:55:47.069601  919000 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:55:47.069729  919000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:55:47.069738  919000 out.go:304] Setting ErrFile to fd 2...
	I0308 02:55:47.069742  919000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:55:47.069927  919000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	W0308 02:55:47.070059  919000 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18333-911675/.minikube/config/config.json: open /home/jenkins/minikube-integration/18333-911675/.minikube/config/config.json: no such file or directory
	I0308 02:55:47.070613  919000 out.go:298] Setting JSON to true
	I0308 02:55:47.071558  919000 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":23873,"bootTime":1709842674,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:55:47.071629  919000 start.go:139] virtualization: kvm guest
	I0308 02:55:47.074159  919000 out.go:97] [download-only-029776] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:55:47.075598  919000 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:55:47.074318  919000 notify.go:220] Checking for updates...
	W0308 02:55:47.074357  919000 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball: no such file or directory
	I0308 02:55:47.078331  919000 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:55:47.079701  919000 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 02:55:47.080988  919000 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:55:47.082265  919000 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0308 02:55:47.084869  919000 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0308 02:55:47.085221  919000 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 02:55:47.116238  919000 out.go:97] Using the kvm2 driver based on user configuration
	I0308 02:55:47.116301  919000 start.go:297] selected driver: kvm2
	I0308 02:55:47.116311  919000 start.go:901] validating driver "kvm2" against <nil>
	I0308 02:55:47.116633  919000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:55:47.116725  919000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18333-911675/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0308 02:55:47.132627  919000 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0308 02:55:47.132682  919000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 02:55:47.133181  919000 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0308 02:55:47.133402  919000 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0308 02:55:47.133494  919000 cni.go:84] Creating CNI manager for ""
	I0308 02:55:47.133508  919000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0308 02:55:47.133516  919000 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0308 02:55:47.133571  919000 start.go:340] cluster config:
	{Name:download-only-029776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-029776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 02:55:47.133731  919000 iso.go:125] acquiring lock: {Name:mk32d156c748b457afd5db822e9825f7e52fc960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 02:55:47.135446  919000 out.go:97] Downloading VM boot image ...
	I0308 02:55:47.135493  919000 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0308 02:55:49.947153  919000 out.go:97] Starting "download-only-029776" primary control-plane node in "download-only-029776" cluster
	I0308 02:55:49.947188  919000 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 02:55:49.963967  919000 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0308 02:55:49.964001  919000 cache.go:56] Caching tarball of preloaded images
	I0308 02:55:49.964107  919000 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0308 02:55:49.965787  919000 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0308 02:55:49.965803  919000 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0308 02:55:49.990754  919000 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18333-911675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-029776 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029776"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
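
The start log above shows where the download-only run caches its artifacts and which checksums minikube validates during download (a .sha256 file for the VM ISO, an md5 for the preload tarball). A manual spot-check of the same files is purely illustrative, since minikube already verifies them; the paths and URLs below are the ones printed in the log:

    CACHE=/home/jenkins/minikube-integration/18333-911675/.minikube/cache

    # Recompute the ISO digest and compare it with the published .sha256 file
    sha256sum "$CACHE/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso"
    curl -s https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256

    # The preload tarball was fetched with an md5 checksum, so use md5sum for that one
    md5sum "$CACHE/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"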

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029776
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-925127 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-925127 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.621056528s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.62s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-925127
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-925127: exit status 85 (75.73166ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC |                     |
	|         | -p download-only-029776        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC | 08 Mar 24 02:55 UTC |
	| delete  | -p download-only-029776        | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC | 08 Mar 24 02:55 UTC |
	| start   | -o=json --download-only        | download-only-925127 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC |                     |
	|         | -p download-only-925127        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:55:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:55:55.550111  919166 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:55:55.550225  919166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:55:55.550233  919166 out.go:304] Setting ErrFile to fd 2...
	I0308 02:55:55.550238  919166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:55:55.550472  919166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 02:55:55.551092  919166 out.go:298] Setting JSON to true
	I0308 02:55:55.552022  919166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":23882,"bootTime":1709842674,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:55:55.552097  919166 start.go:139] virtualization: kvm guest
	I0308 02:55:55.554345  919166 out.go:97] [download-only-925127] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:55:55.554482  919166 notify.go:220] Checking for updates...
	I0308 02:55:55.555886  919166 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:55:55.557559  919166 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:55:55.559055  919166 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 02:55:55.560355  919166 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:55:55.561721  919166 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-925127 host does not exist
	  To start a cluster, run: "minikube start -p download-only-925127"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-925127
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-219734 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-219734 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.784887566s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-219734
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-219734: exit status 85 (81.745003ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC |                     |
	|         | -p download-only-029776           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC | 08 Mar 24 02:55 UTC |
	| delete  | -p download-only-029776           | download-only-029776 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC | 08 Mar 24 02:55 UTC |
	| start   | -o=json --download-only           | download-only-925127 | jenkins | v1.32.0 | 08 Mar 24 02:55 UTC |                     |
	|         | -p download-only-925127           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| delete  | -p download-only-925127           | download-only-925127 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC | 08 Mar 24 02:56 UTC |
	| start   | -o=json --download-only           | download-only-219734 | jenkins | v1.32.0 | 08 Mar 24 02:56 UTC |                     |
	|         | -p download-only-219734           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 02:56:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 02:56:00.535664  919318 out.go:291] Setting OutFile to fd 1 ...
	I0308 02:56:00.535807  919318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:56:00.535817  919318 out.go:304] Setting ErrFile to fd 2...
	I0308 02:56:00.535821  919318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 02:56:00.536021  919318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 02:56:00.536626  919318 out.go:298] Setting JSON to true
	I0308 02:56:00.537575  919318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":23887,"bootTime":1709842674,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 02:56:00.537654  919318 start.go:139] virtualization: kvm guest
	I0308 02:56:00.540211  919318 out.go:97] [download-only-219734] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 02:56:00.541909  919318 out.go:169] MINIKUBE_LOCATION=18333
	I0308 02:56:00.540409  919318 notify.go:220] Checking for updates...
	I0308 02:56:00.543434  919318 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 02:56:00.544805  919318 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 02:56:00.546144  919318 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 02:56:00.547692  919318 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-219734 host does not exist
	  To start a cluster, run: "minikube start -p download-only-219734"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-219734
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-920537 --alsologtostderr --binary-mirror http://127.0.0.1:33887 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-920537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-920537
--- PASS: TestBinaryMirror (0.58s)

TestOffline (132.37s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-290342 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-290342 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m11.22404401s)
helpers_test.go:175: Cleaning up "offline-crio-290342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-290342
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-290342: (1.144732199s)
--- PASS: TestOffline (132.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-963897
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-963897: exit status 85 (65.717594ms)
-- stdout --
	* Profile "addons-963897" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963897"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-963897
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-963897: exit status 85 (66.362483ms)
-- stdout --
	* Profile "addons-963897" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963897"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (145.58s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-963897 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-963897 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.578604971s)
--- PASS: TestAddons/Setup (145.58s)

TestAddons/parallel/Registry (15.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 42.268797ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rs9mh" [96e3cb85-f90b-45c0-b9d4-9c2c2da9ad88] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004810313s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4snpq" [07f9d0bd-1ed8-4806-826e-1720b7cf2dbf] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005535248s
addons_test.go:340: (dbg) Run:  kubectl --context addons-963897 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-963897 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-963897 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.681880943s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 ip
2024/03/08 02:58:47 [DEBUG] GET http://192.168.39.212:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.62s)
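For reference, the registry check above can be reproduced by hand with the same commands the test runs (a sketch only; it assumes the addons-963897 profile is still up with the registry addon enabled, and the service URL is the one probed in the log):

# probe the in-cluster registry service from a throwaway busybox pod, then fetch the node IP and disable the addon
kubectl --context addons-963897 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
out/minikube-linux-amd64 -p addons-963897 ip
out/minikube-linux-amd64 -p addons-963897 addons disable registry --alsologtostderr -v=1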
TestAddons/parallel/InspektorGadget (12.05s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pbxhb" [9088a257-c8b8-4fa8-b8e7-ce428f8312bf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005270791s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-963897
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-963897: (6.042118007s)
--- PASS: TestAddons/parallel/InspektorGadget (12.05s)

TestAddons/parallel/MetricsServer (6.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.810922ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-szqb7" [6456987a-f2c2-4dd8-9fd2-268027357dff] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008297859s
addons_test.go:415: (dbg) Run:  kubectl --context addons-963897 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)

TestAddons/parallel/HelmTiller (12.52s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.268145ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gk6tb" [c562c869-b9e4-4778-b548-5329e8e7ff62] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015898107s
addons_test.go:473: (dbg) Run:  kubectl --context addons-963897 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-963897 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.648879762s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.52s)

TestAddons/parallel/CSI (70.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 43.42253ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e4af0370-5388-444d-8293-568e56ccb6ef] Pending
helpers_test.go:344: "task-pv-pod" [e4af0370-5388-444d-8293-568e56ccb6ef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e4af0370-5388-444d-8293-568e56ccb6ef] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003889079s
addons_test.go:584: (dbg) Run:  kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-963897 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-963897 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-963897 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-963897 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4e259bab-61b8-43a8-b21d-f183eee42521] Pending
helpers_test.go:344: "task-pv-pod-restore" [4e259bab-61b8-43a8-b21d-f183eee42521] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4e259bab-61b8-43a8-b21d-f183eee42521] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005084289s
addons_test.go:626: (dbg) Run:  kubectl --context addons-963897 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-963897 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-963897 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-963897 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823924283s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.01s)
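Condensed from the polling output above, the CSI flow the test drives is roughly the following (a sketch; the manifest paths are the testdata files named in the log, and it assumes the addons-963897 profile is running with the csi-hostpath-driver and volumesnapshots addons enabled):

# provision a csi-hostpath-backed claim, mount it, snapshot it, then restore the snapshot into a new claim
kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pvc.yaml
kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pv-pod.yaml
kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/snapshot.yaml
kubectl --context addons-963897 delete pod task-pv-pod
kubectl --context addons-963897 delete pvc hpvc
kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
kubectl --context addons-963897 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
# cleanup mirrors the end of the test
kubectl --context addons-963897 delete pod task-pv-pod-restore
kubectl --context addons-963897 delete pvc hpvc-restore
kubectl --context addons-963897 delete volumesnapshot new-snapshot-demo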
TestAddons/parallel/Headlamp (14.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-963897 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-963897 --alsologtostderr -v=1: (2.009495431s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-frnvt" [f8ff87d5-f64c-4696-97eb-f95b48854ffb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-frnvt" [f8ff87d5-f64c-4696-97eb-f95b48854ffb] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.010083353s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

TestAddons/parallel/CloudSpanner (5.77s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-dmbhp" [7040878f-bbc3-49e6-ae19-6f598edf1e1c] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.024028066s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-963897
--- PASS: TestAddons/parallel/CloudSpanner (5.77s)

TestAddons/parallel/LocalPath (53.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-963897 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-963897 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3b1afe2e-772f-48ec-8b67-091a0399fb52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3b1afe2e-772f-48ec-8b67-091a0399fb52] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3b1afe2e-772f-48ec-8b67-091a0399fb52] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004759182s
addons_test.go:891: (dbg) Run:  kubectl --context addons-963897 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 ssh "cat /opt/local-path-provisioner/pvc-23f464d9-185e-46fe-9762-6116259b684b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-963897 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-963897 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-963897 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-963897 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.701023264s)
--- PASS: TestAddons/parallel/LocalPath (53.61s)

TestAddons/parallel/NvidiaDevicePlugin (5.73s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bcff4" [8fec37a2-1bb5-4f90-ada2-d022b6694cf3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005817889s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-963897
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.73s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-62pxk" [0eff270d-61f4-4227-a0b2-996e1279ceb0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004795091s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-963897 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-963897 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (98.45s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-576568 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0308 04:02:35.054507  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 04:02:52.008504  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-576568 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m36.680628922s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-576568 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-576568 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-576568 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-576568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-576568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-576568: (1.175372591s)
--- PASS: TestCertOptions (98.45s)

TestCertExpiration (283.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-401581 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-401581 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.880876335s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-401581 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-401581 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.894641547s)
helpers_test.go:175: Cleaning up "cert-expiration-401581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-401581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-401581: (1.036258249s)
--- PASS: TestCertExpiration (283.81s)

TestForceSystemdFlag (59.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-786598 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-786598 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.244478776s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-786598 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-786598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-786598
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-786598: (1.067783408s)
--- PASS: TestForceSystemdFlag (59.53s)

TestForceSystemdEnv (69.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-292856 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0308 04:03:32.256698  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-292856 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.014295123s)
helpers_test.go:175: Cleaning up "force-systemd-env-292856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-292856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-292856: (1.02051281s)
--- PASS: TestForceSystemdEnv (69.04s)

TestKVMDriverInstallOrUpdate (1.35s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.35s)

TestErrorSpam/setup (44.72s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-713557 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-713557 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-713557 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-713557 --driver=kvm2  --container-runtime=crio: (44.723636011s)
--- PASS: TestErrorSpam/setup (44.72s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.78s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.62s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 pause
--- PASS: TestErrorSpam/pause (1.62s)

TestErrorSpam/unpause (1.8s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (4.86s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop: (2.299398103s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop: (1.044971227s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-713557 --log_dir /tmp/nospam-713557 stop: (1.512630482s)
--- PASS: TestErrorSpam/stop (4.86s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18333-911675/.minikube/files/etc/test/nested/copy/918988/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.01s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-576754 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.012048687s)
--- PASS: TestFunctional/serial/StartWithProxy (61.01s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.03s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-576754 --alsologtostderr -v=8: (36.033288258s)
functional_test.go:659: soft start took 36.03391721s for "functional-576754" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.03s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-576754 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:3.1: (1.036734953s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:3.3: (1.109724235s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 cache add registry.k8s.io/pause:latest: (1.063731328s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-576754 /tmp/TestFunctionalserialCacheCmdcacheadd_local1341969455/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache add minikube-local-cache-test:functional-576754
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache delete minikube-local-cache-test:functional-576754
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-576754
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.034866ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
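The reload check above boils down to the following sequence (a sketch; it assumes the functional-576754 profile is up and that registry.k8s.io/pause:latest was previously added to the cache, as in the add_remote step earlier):

# remove the cached image inside the node, confirm it is gone, reload the cache, confirm it is back
out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail with exit status 1
out/minikube-linux-amd64 -p functional-576754 cache reload
out/minikube-linux-amd64 -p functional-576754 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload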
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 kubectl -- --context functional-576754 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-576754 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.44s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-576754 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.4354632s)
functional_test.go:757: restart took 33.435578031s for "functional-576754" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.44s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-576754 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 logs: (1.66430138s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

TestFunctional/serial/LogsFileCmd (1.81s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 logs --file /tmp/TestFunctionalserialLogsFileCmd3146406226/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 logs --file /tmp/TestFunctionalserialLogsFileCmd3146406226/001/logs.txt: (1.807140357s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (4.05s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-576754 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-576754
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-576754: exit status 115 (297.705772ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.126:30281 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-576754 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
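The negative check above can be reproduced with the commands from the log (a sketch; it assumes the functional-576754 profile is running and uses the testdata/invalidsvc.yaml manifest the test applies):

# a service with no running backing pod should make `minikube service` exit with status 115 (SVC_UNREACHABLE)
kubectl --context functional-576754 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-576754
kubectl --context functional-576754 delete -f testdata/invalidsvc.yaml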
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 config get cpus: exit status 14 (83.520886ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 config get cpus: exit status 14 (59.222414ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
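The sequence asserted above can be replayed manually; a sketch against the same profile. `config get` on a key that has not been set exits with status 14, while `config set`/`config unset` themselves exit 0:

  out/minikube-linux-amd64 -p functional-576754 config unset cpus
  out/minikube-linux-amd64 -p functional-576754 config get cpus    # exit 14: key not found
  out/minikube-linux-amd64 -p functional-576754 config set cpus 2
  out/minikube-linux-amd64 -p functional-576754 config get cpus    # prints 2, exit 0
  out/minikube-linux-amd64 -p functional-576754 config unset cpus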

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-576754 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-576754 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 926948: os: process already finished
E0308 03:08:37.378007  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (30.52s)
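The dashboard check only starts the proxy in the background and then tears it down; a rough sketch of the equivalent manual flow (the backgrounding and explicit kill are my framing, the flags are the ones logged above):

  # serve the dashboard proxy on a fixed port and print its URL without opening a browser
  out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-576754 &
  DASH_PID=$!
  # ... use the printed URL, then stop the proxy
  kill "$DASH_PID"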

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-576754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (230.460924ms)

                                                
                                                
-- stdout --
	* [functional-576754] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:08:03.146154  926182 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:08:03.146358  926182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:03.146366  926182 out.go:304] Setting ErrFile to fd 2...
	I0308 03:08:03.146374  926182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:03.146703  926182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:08:03.147530  926182 out.go:298] Setting JSON to false
	I0308 03:08:03.149016  926182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24609,"bootTime":1709842674,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:08:03.149141  926182 start.go:139] virtualization: kvm guest
	I0308 03:08:03.152032  926182 out.go:177] * [functional-576754] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 03:08:03.153452  926182 notify.go:220] Checking for updates...
	I0308 03:08:03.153976  926182 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:08:03.155461  926182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:08:03.157431  926182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:08:03.158625  926182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:03.159864  926182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:08:03.161120  926182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:08:03.162898  926182 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:08:03.163604  926182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:03.164027  926182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:03.201032  926182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
	I0308 03:08:03.201692  926182 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:03.202363  926182 main.go:141] libmachine: Using API Version  1
	I0308 03:08:03.202380  926182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:03.202752  926182 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:03.202918  926182 main.go:141] libmachine: (functional-576754) Calling .DriverName
	I0308 03:08:03.203180  926182 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:08:03.203571  926182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:03.203607  926182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:03.228571  926182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0308 03:08:03.229037  926182 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:03.229611  926182 main.go:141] libmachine: Using API Version  1
	I0308 03:08:03.229645  926182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:03.230585  926182 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:03.230784  926182 main.go:141] libmachine: (functional-576754) Calling .DriverName
	I0308 03:08:03.276731  926182 out.go:177] * Using the kvm2 driver based on existing profile
	I0308 03:08:03.278145  926182 start.go:297] selected driver: kvm2
	I0308 03:08:03.278159  926182 start.go:901] validating driver "kvm2" against &{Name:functional-576754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-576754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:08:03.278263  926182 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:08:03.280823  926182 out.go:177] 
	W0308 03:08:03.282198  926182 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0308 03:08:03.283527  926182 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
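Both invocations above validate configuration only; the first is expected to fail because 250MB is below minikube's usable minimum of 1800MB. A sketch of the two cases, using the flags from the log:

  # rejected during validation: RSRC_INSUFFICIENT_REQ_MEMORY, exit 23
  out/minikube-linux-amd64 start -p functional-576754 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # the same dry run with the profile's existing memory setting passes validation, exit 0
  out/minikube-linux-amd64 start -p functional-576754 --dry-run --driver=kvm2 --container-runtime=crio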

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-576754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-576754 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.382009ms)

                                                
                                                
-- stdout --
	* [functional-576754] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:08:03.503777  926329 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:08:03.503963  926329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:03.503977  926329 out.go:304] Setting ErrFile to fd 2...
	I0308 03:08:03.503983  926329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:08:03.504286  926329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:08:03.504860  926329 out.go:298] Setting JSON to false
	I0308 03:08:03.505976  926329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":24610,"bootTime":1709842674,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 03:08:03.506040  926329 start.go:139] virtualization: kvm guest
	I0308 03:08:03.508077  926329 out.go:177] * [functional-576754] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0308 03:08:03.509802  926329 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 03:08:03.509794  926329 notify.go:220] Checking for updates...
	I0308 03:08:03.511152  926329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 03:08:03.512498  926329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 03:08:03.513674  926329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 03:08:03.515218  926329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 03:08:03.516624  926329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 03:08:03.518566  926329 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:08:03.519227  926329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:03.519281  926329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:03.535780  926329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0308 03:08:03.536271  926329 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:03.536899  926329 main.go:141] libmachine: Using API Version  1
	I0308 03:08:03.536921  926329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:03.537421  926329 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:03.537683  926329 main.go:141] libmachine: (functional-576754) Calling .DriverName
	I0308 03:08:03.538029  926329 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 03:08:03.538477  926329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:08:03.538523  926329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:08:03.554456  926329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0308 03:08:03.554940  926329 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:08:03.555476  926329 main.go:141] libmachine: Using API Version  1
	I0308 03:08:03.555509  926329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:08:03.555929  926329 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:08:03.556171  926329 main.go:141] libmachine: (functional-576754) Calling .DriverName
	I0308 03:08:03.593839  926329 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0308 03:08:03.595232  926329 start.go:297] selected driver: kvm2
	I0308 03:08:03.595251  926329 start.go:901] validating driver "kvm2" against &{Name:functional-576754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-576754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 03:08:03.595371  926329 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 03:08:03.597522  926329 out.go:177] 
	W0308 03:08:03.598888  926329 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0308 03:08:03.600326  926329 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
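`status` supports Go-template and JSON output in addition to the default table; a sketch of the three forms exercised above (the template keys are the status struct fields used by the test: .Host, .Kubelet, .APIServer, .Kubeconfig):

  out/minikube-linux-amd64 -p functional-576754 status
  out/minikube-linux-amd64 -p functional-576754 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-576754 status -o json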

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-576754 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-576754 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8ghhw" [00dc6f75-0039-424f-9d21-e1fc37d326a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8ghhw" [00dc6f75-0039-424f-9d21-e1fc37d326a2] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004915399s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.126:32335
functional_test.go:1671: http://192.168.39.126:32335: success! body:

Hostname: hello-node-connect-55497b8b78-8ghhw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.126:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.126:32335
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.60s)
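The connectivity check boils down to exposing a deployment as a NodePort and curling the URL that `minikube service --url` resolves; a sketch using the same image and names as above (the `kubectl wait` step is an addition for waiting on readiness, the test polls pod labels instead):

  kubectl --context functional-576754 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-576754 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-576754 wait --for=condition=available deployment/hello-node-connect --timeout=120s
  URL=$(out/minikube-linux-amd64 -p functional-576754 service hello-node-connect --url)
  curl -s "$URL"   # echoserver reports Hostname, request headers, etc.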

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3ec0914f-dc62-441c-baf1-03f5b4b9603d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007845273s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-576754 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-576754 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-576754 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ea62ea5-7c3b-4c31-a3ca-22485edf6cb8] Pending
helpers_test.go:344: "sp-pod" [8ea62ea5-7c3b-4c31-a3ca-22485edf6cb8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ea62ea5-7c3b-4c31-a3ca-22485edf6cb8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004195417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-576754 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-576754 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-576754 delete -f testdata/storage-provisioner/pod.yaml: (3.249601556s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b39b1428-8f29-43f6-9c4c-9e8224311164] Pending
helpers_test.go:344: "sp-pod" [b39b1428-8f29-43f6-9c4c-9e8224311164] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b39b1428-8f29-43f6-9c4c-9e8224311164] Running
E0308 03:08:34.816848  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
2024/03/08 03:08:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004912905s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-576754 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.65s)
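What this test asserts is that data written through the PVC survives deletion and re-creation of the consuming pod; a sketch using the same testdata manifests and pod name, with `kubectl wait` added in place of the test's own readiness polling:

  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-576754 wait --for=condition=ready pod/sp-pod --timeout=180s
  kubectl --context functional-576754 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-576754 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-576754 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-576754 wait --for=condition=ready pod/sp-pod --timeout=180s
  kubectl --context functional-576754 exec sp-pod -- ls /tmp/mount   # foo is still there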

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh -n functional-576754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cp functional-576754:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd150482591/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh -n functional-576754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh -n functional-576754 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.36s)
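`minikube cp` copies files between the host and the guest VM; a sketch of the directions checked above, drawn from the logged commands (the /tmp destination on the host is arbitrary):

  # host -> guest
  out/minikube-linux-amd64 -p functional-576754 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # guest -> host
  out/minikube-linux-amd64 -p functional-576754 cp functional-576754:/home/docker/cp-test.txt /tmp/cp-test.txt
  # verify inside the VM (-n selects the node)
  out/minikube-linux-amd64 -p functional-576754 ssh -n functional-576754 "sudo cat /home/docker/cp-test.txt"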

                                                
                                    
TestFunctional/parallel/MySQL (25.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-576754 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hwfzw" [d9a86713-3e58-4f41-8594-cd817e0c16ae] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hwfzw" [d9a86713-3e58-4f41-8594-cd817e0c16ae] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.028512036s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-576754 exec mysql-859648c796-hwfzw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-576754 exec mysql-859648c796-hwfzw -- mysql -ppassword -e "show databases;": exit status 1 (392.086208ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-576754 exec mysql-859648c796-hwfzw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-576754 exec mysql-859648c796-hwfzw -- mysql -ppassword -e "show databases;": exit status 1 (241.444118ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-576754 exec mysql-859648c796-hwfzw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.67s)
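The two non-zero exits above are just the MySQL container still initializing (access denied, then the socket not yet available); the test retries until the query succeeds. A sketch of an equivalent retry loop, assuming the app=mysql label from the testdata deployment:

  POD=$(kubectl --context functional-576754 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
  until kubectl --context functional-576754 exec "$POD" -- mysql -ppassword -e "show databases;"; do
    sleep 5   # mysqld may not be accepting connections yet
  done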

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/918988/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/test/nested/copy/918988/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/918988.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/918988.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/918988.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /usr/share/ca-certificates/918988.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9189882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/9189882.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9189882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /usr/share/ca-certificates/9189882.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)
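The certificate placed under the test's MINIKUBE_HOME is mirrored into the VM in both canonical locations and is also reachable through an OpenSSL hash-named entry; the numeric file names appear to come from the test process id (918988), as used elsewhere in this log. A sketch of the checks:

  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/918988.pem"
  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /usr/share/ca-certificates/918988.pem"
  # hash-named entry under /etc/ssl/certs
  out/minikube-linux-amd64 -p functional-576754 ssh "sudo cat /etc/ssl/certs/51391683.0"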

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-576754 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active docker": exit status 1 (309.547252ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active containerd": exit status 1 (350.37185ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
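With crio selected as the container runtime, the other runtimes' systemd units should be inactive; `systemctl is-active` prints the state and returns non-zero for anything other than active, which surfaces here as ssh exit status 3. A sketch (the crio line is an assumption, not exercised by this test):

  out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active docker"      # prints "inactive", non-zero exit
  out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active containerd"  # prints "inactive", non-zero exit
  out/minikube-linux-amd64 -p functional-576754 ssh "sudo systemctl is-active crio"        # prints "active", exit 0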

                                                
                                    
TestFunctional/parallel/License (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-576754 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-576754 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rh8pv" [c4319fe5-87da-4ea0-99a5-09474f68bedf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rh8pv" [c4319fe5-87da-4ea0-99a5-09474f68bedf] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004006323s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "341.575725ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "62.919487ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "345.36234ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "63.068705ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdany-port147620404/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709867275388740943" to /tmp/TestFunctionalparallelMountCmdany-port147620404/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709867275388740943" to /tmp/TestFunctionalparallelMountCmdany-port147620404/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709867275388740943" to /tmp/TestFunctionalparallelMountCmdany-port147620404/001/test-1709867275388740943
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.519215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  8 03:07 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  8 03:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  8 03:07 test-1709867275388740943
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh cat /mount-9p/test-1709867275388740943
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-576754 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [082d7f9d-22b7-48b1-a886-a6b71bf2bb73] Pending
helpers_test.go:344: "busybox-mount" [082d7f9d-22b7-48b1-a886-a6b71bf2bb73] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [082d7f9d-22b7-48b1-a886-a6b71bf2bb73] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [082d7f9d-22b7-48b1-a886-a6b71bf2bb73] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004459607s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-576754 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdany-port147620404/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.55s)
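The mount tests run `minikube mount` as a background daemon, wait for the 9p mount to show up in the guest, and then exercise it from a pod; a sketch of the host-side flow (the host directory name is arbitrary):

  # expose a host directory inside the VM over 9p
  out/minikube-linux-amd64 mount -p functional-576754 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!
  # wait until the guest sees the mount, then inspect it
  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-576754 ssh -- ls -la /mount-9p
  # tear down
  out/minikube-linux-amd64 -p functional-576754 ssh "sudo umount -f /mount-9p"
  kill "$MOUNT_PID"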

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdspecific-port3044941190/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.339495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdspecific-port3044941190/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "sudo umount -f /mount-9p": exit status 1 (273.376102ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-576754 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdspecific-port3044941190/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T" /mount1: exit status 1 (352.149837ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-576754 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-576754 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1292891308/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
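The cleanup variant starts several mounts and then relies on `minikube mount --kill=true`, which (as the log implies) terminates all mount helper processes for the profile rather than unmounting a single path; a sketch with an arbitrary host directory:

  out/minikube-linux-amd64 mount -p functional-576754 /tmp/mount-src:/mount1 &
  out/minikube-linux-amd64 mount -p functional-576754 /tmp/mount-src:/mount2 &
  out/minikube-linux-amd64 mount -p functional-576754 /tmp/mount-src:/mount3 &
  # kill every mount process associated with the profile in one shot
  out/minikube-linux-amd64 mount -p functional-576754 --kill=true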

                                                
                                    
TestFunctional/parallel/Version/short (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

                                                
                                    
TestFunctional/parallel/Version/components (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service list -o json
functional_test.go:1490: Took "435.277819ms" to run "out/minikube-linux-amd64 -p functional-576754 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576754 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-576754
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576754 image ls --format short --alsologtostderr:
I0308 03:08:25.612212  927497 out.go:291] Setting OutFile to fd 1 ...
I0308 03:08:25.612381  927497 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:25.612392  927497 out.go:304] Setting ErrFile to fd 2...
I0308 03:08:25.612399  927497 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:25.612703  927497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
I0308 03:08:25.613498  927497 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:25.613657  927497 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:25.614232  927497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:25.614296  927497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:25.630627  927497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
I0308 03:08:25.631155  927497 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:25.631938  927497 main.go:141] libmachine: Using API Version  1
I0308 03:08:25.631967  927497 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:25.632372  927497 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:25.632626  927497 main.go:141] libmachine: (functional-576754) Calling .GetState
I0308 03:08:25.634547  927497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:25.634597  927497 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:25.650412  927497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
I0308 03:08:25.650887  927497 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:25.651439  927497 main.go:141] libmachine: Using API Version  1
I0308 03:08:25.651466  927497 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:25.651823  927497 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:25.652069  927497 main.go:141] libmachine: (functional-576754) Calling .DriverName
I0308 03:08:25.652320  927497 ssh_runner.go:195] Run: systemctl --version
I0308 03:08:25.652351  927497 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
I0308 03:08:25.655342  927497 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:25.655778  927497 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
I0308 03:08:25.655812  927497 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:25.655945  927497 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
I0308 03:08:25.656134  927497 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
I0308 03:08:25.656307  927497 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
I0308 03:08:25.656494  927497 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
I0308 03:08:25.764341  927497 ssh_runner.go:195] Run: sudo crictl images --output json
I0308 03:08:25.867629  927497 main.go:141] libmachine: Making call to close driver server
I0308 03:08:25.867650  927497 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:25.867976  927497 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:25.868002  927497 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:25.868027  927497 main.go:141] libmachine: Making call to close driver server
I0308 03:08:25.868038  927497 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:25.868046  927497 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:25.868363  927497 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:25.868378  927497 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:25.868420  927497 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls --format table --alsologtostderr
E0308 03:08:32.895080  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576754 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-576754  | 3b7a9e29cfb80 | 3.35kB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-576754  | ce583bf6ff8ef | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | e4720093a3c13 | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576754 image ls --format table --alsologtostderr:
I0308 03:08:32.768708  927693 out.go:291] Setting OutFile to fd 1 ...
I0308 03:08:32.768835  927693 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:32.768843  927693 out.go:304] Setting ErrFile to fd 2...
I0308 03:08:32.768847  927693 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:32.769015  927693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
I0308 03:08:32.769624  927693 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:32.769714  927693 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:32.770063  927693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:32.770105  927693 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:32.784898  927693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
I0308 03:08:32.785446  927693 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:32.786153  927693 main.go:141] libmachine: Using API Version  1
I0308 03:08:32.786185  927693 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:32.786546  927693 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:32.786794  927693 main.go:141] libmachine: (functional-576754) Calling .GetState
I0308 03:08:32.788760  927693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:32.788812  927693 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:32.803785  927693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
I0308 03:08:32.804231  927693 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:32.804746  927693 main.go:141] libmachine: Using API Version  1
I0308 03:08:32.804788  927693 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:32.805154  927693 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:32.805413  927693 main.go:141] libmachine: (functional-576754) Calling .DriverName
I0308 03:08:32.805653  927693 ssh_runner.go:195] Run: systemctl --version
I0308 03:08:32.805678  927693 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
I0308 03:08:32.808549  927693 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:32.808979  927693 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
I0308 03:08:32.809016  927693 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:32.809208  927693 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
I0308 03:08:32.809408  927693 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
I0308 03:08:32.809593  927693 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
I0308 03:08:32.809729  927693 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
I0308 03:08:32.896645  927693 ssh_runner.go:195] Run: sudo crictl images --output json
I0308 03:08:32.947409  927693 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.947426  927693 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.947738  927693 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:32.947733  927693 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.947783  927693 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:32.947798  927693 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.947809  927693 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.948070  927693 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.948092  927693 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:32.948099  927693 main.go:141] libmachine: Making call to close connection to plugin binary
E0308 03:08:33.536089  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls --format json --alsologtostderr
E0308 03:08:32.574431  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576754 image ls --format json --alsologtostderr:
[{"id":"3b7a9e29cfb80efa702be76be46f460857ee808acb11607a6a41b22a98ecf4b7","repoDigests":["localhost/minikube-local-cache-test@sha256:d114f4bf7d5292406feaa50c76a55a363f2e68f6cc531e22a8217539cc1001e2"],"repoTags":["localhost/minikube-local-cache-test:functional-576754"],"size":"3345"},{"id":"ce583bf6ff8efb35ac4b25fe62ba20b0f349dc72d1091f9c30e8f860883787bb","repoDigests":["localhost/my-image@sha256:b3075cff949b662f5dc98c32873365a5bb216a3f7aa4fedffaa049b89c48ae68"],"repoTags":["localhost/my-image:functional-576754"],"size":"1468600"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed110
3e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c7296
7bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5d
c3029"],"repoTags":[],"size":"249229937"},{"id":"31580fc6be97b04246f96b8f2e8d612c804e33bf08a505895fce9dfb82151ba9","repoDigests":["docker.io/library/d1b757e091891315e73f6f996549929c8e6d6c52549db480df8004a9dacafa97-tmp@sha256:b763f9249b3918f8761987cff3f677596d23e4eeab554beeed399e8318ce1895"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"si
ze":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898
bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":["docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71","docker.io/library/nginx@sha256:c26ae7472d624
ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865895"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd2
77787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576754 image ls --format json --alsologtostderr:
I0308 03:08:32.542122  927669 out.go:291] Setting OutFile to fd 1 ...
I0308 03:08:32.542602  927669 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:32.542621  927669 out.go:304] Setting ErrFile to fd 2...
I0308 03:08:32.542629  927669 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:32.543081  927669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
I0308 03:08:32.544412  927669 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:32.544591  927669 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:32.545198  927669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:32.545266  927669 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:32.560209  927669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
I0308 03:08:32.560720  927669 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:32.561499  927669 main.go:141] libmachine: Using API Version  1
I0308 03:08:32.561523  927669 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:32.561936  927669 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:32.562178  927669 main.go:141] libmachine: (functional-576754) Calling .GetState
I0308 03:08:32.564193  927669 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:32.564249  927669 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:32.579149  927669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
I0308 03:08:32.579567  927669 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:32.580138  927669 main.go:141] libmachine: Using API Version  1
I0308 03:08:32.580166  927669 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:32.580480  927669 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:32.580738  927669 main.go:141] libmachine: (functional-576754) Calling .DriverName
I0308 03:08:32.580990  927669 ssh_runner.go:195] Run: systemctl --version
I0308 03:08:32.581023  927669 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
I0308 03:08:32.583781  927669 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:32.584171  927669 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
I0308 03:08:32.584195  927669 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:32.584363  927669 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
I0308 03:08:32.584531  927669 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
I0308 03:08:32.584680  927669 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
I0308 03:08:32.584823  927669 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
I0308 03:08:32.664249  927669 ssh_runner.go:195] Run: sudo crictl images --output json
I0308 03:08:32.706556  927669 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.706571  927669 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.706941  927669 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.706982  927669 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:32.706989  927669 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:32.707002  927669 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.707069  927669 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.707320  927669 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.707346  927669 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:32.707364  927669 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
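
Editor's note: the JSON listing above is a flat array of image records, each carrying an id, its repoDigests, its repoTags, and a size reported as a string of bytes. Below is a minimal sketch of consuming that output from Go, assuming only the field names visible in the stdout above; the ImageEntry type and the direct exec of the minikube binary are illustrative, not part of the test.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// ImageEntry mirrors one element of the `image ls --format json` output
// shown above: image ID, repo digests, repo tags, and a size-in-bytes string.
type ImageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Same command the test runs, minus --alsologtostderr.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-576754",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}

	var images []ImageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}

	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}

The same fields back the yaml and table listings shown in the neighbouring subtests.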

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576754 image ls --format yaml --alsologtostderr:
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests:
- docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "190865895"
- id: 3b7a9e29cfb80efa702be76be46f460857ee808acb11607a6a41b22a98ecf4b7
repoDigests:
- localhost/minikube-local-cache-test@sha256:d114f4bf7d5292406feaa50c76a55a363f2e68f6cc531e22a8217539cc1001e2
repoTags:
- localhost/minikube-local-cache-test:functional-576754
size: "3345"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576754 image ls --format yaml --alsologtostderr:
I0308 03:08:25.945810  927521 out.go:291] Setting OutFile to fd 1 ...
I0308 03:08:25.946000  927521 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:25.946016  927521 out.go:304] Setting ErrFile to fd 2...
I0308 03:08:25.946022  927521 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:25.946325  927521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
I0308 03:08:25.947181  927521 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:25.947390  927521 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:25.947959  927521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:25.948025  927521 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:25.964222  927521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
I0308 03:08:25.964840  927521 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:25.965599  927521 main.go:141] libmachine: Using API Version  1
I0308 03:08:25.965625  927521 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:25.966085  927521 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:25.966343  927521 main.go:141] libmachine: (functional-576754) Calling .GetState
I0308 03:08:25.968270  927521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:25.968321  927521 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:25.984582  927521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
I0308 03:08:25.985193  927521 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:25.985854  927521 main.go:141] libmachine: Using API Version  1
I0308 03:08:25.985890  927521 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:25.986281  927521 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:25.986494  927521 main.go:141] libmachine: (functional-576754) Calling .DriverName
I0308 03:08:25.986717  927521 ssh_runner.go:195] Run: systemctl --version
I0308 03:08:25.986751  927521 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
I0308 03:08:25.989482  927521 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:25.989950  927521 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
I0308 03:08:25.989982  927521 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:25.990115  927521 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
I0308 03:08:25.990337  927521 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
I0308 03:08:25.990533  927521 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
I0308 03:08:25.990662  927521 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
I0308 03:08:26.108351  927521 ssh_runner.go:195] Run: sudo crictl images --output json
I0308 03:08:26.200253  927521 main.go:141] libmachine: Making call to close driver server
I0308 03:08:26.200281  927521 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:26.200577  927521 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:26.200595  927521 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:26.200610  927521 main.go:141] libmachine: Making call to close driver server
I0308 03:08:26.200618  927521 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:26.200935  927521 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:26.200952  927521 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-576754 ssh pgrep buildkitd: exit status 1 (253.476333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image build -t localhost/my-image:functional-576754 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 image build -t localhost/my-image:functional-576754 testdata/build --alsologtostderr: (5.769282062s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-576754 image build -t localhost/my-image:functional-576754 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 31580fc6be9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-576754
--> ce583bf6ff8
Successfully tagged localhost/my-image:functional-576754
ce583bf6ff8efb35ac4b25fe62ba20b0f349dc72d1091f9c30e8f860883787bb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-576754 image build -t localhost/my-image:functional-576754 testdata/build --alsologtostderr:
I0308 03:08:26.518795  927586 out.go:291] Setting OutFile to fd 1 ...
I0308 03:08:26.519092  927586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:26.519104  927586 out.go:304] Setting ErrFile to fd 2...
I0308 03:08:26.519108  927586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0308 03:08:26.519285  927586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
I0308 03:08:26.519853  927586 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:26.520424  927586 config.go:182] Loaded profile config "functional-576754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0308 03:08:26.520829  927586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:26.520886  927586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:26.536288  927586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
I0308 03:08:26.536950  927586 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:26.537567  927586 main.go:141] libmachine: Using API Version  1
I0308 03:08:26.537595  927586 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:26.538040  927586 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:26.538271  927586 main.go:141] libmachine: (functional-576754) Calling .GetState
I0308 03:08:26.540143  927586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0308 03:08:26.540183  927586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0308 03:08:26.556255  927586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
I0308 03:08:26.556784  927586 main.go:141] libmachine: () Calling .GetVersion
I0308 03:08:26.557335  927586 main.go:141] libmachine: Using API Version  1
I0308 03:08:26.557364  927586 main.go:141] libmachine: () Calling .SetConfigRaw
I0308 03:08:26.557771  927586 main.go:141] libmachine: () Calling .GetMachineName
I0308 03:08:26.558021  927586 main.go:141] libmachine: (functional-576754) Calling .DriverName
I0308 03:08:26.558260  927586 ssh_runner.go:195] Run: systemctl --version
I0308 03:08:26.558292  927586 main.go:141] libmachine: (functional-576754) Calling .GetSSHHostname
I0308 03:08:26.561761  927586 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:26.562206  927586 main.go:141] libmachine: (functional-576754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:25:d9", ip: ""} in network mk-functional-576754: {Iface:virbr1 ExpiryTime:2024-03-08 04:05:42 +0000 UTC Type:0 Mac:52:54:00:8a:25:d9 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-576754 Clientid:01:52:54:00:8a:25:d9}
I0308 03:08:26.562238  927586 main.go:141] libmachine: (functional-576754) DBG | domain functional-576754 has defined IP address 192.168.39.126 and MAC address 52:54:00:8a:25:d9 in network mk-functional-576754
I0308 03:08:26.562472  927586 main.go:141] libmachine: (functional-576754) Calling .GetSSHPort
I0308 03:08:26.562676  927586 main.go:141] libmachine: (functional-576754) Calling .GetSSHKeyPath
I0308 03:08:26.563377  927586 main.go:141] libmachine: (functional-576754) Calling .GetSSHUsername
I0308 03:08:26.563572  927586 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/functional-576754/id_rsa Username:docker}
I0308 03:08:26.705869  927586 build_images.go:151] Building image from path: /tmp/build.4135419376.tar
I0308 03:08:26.705963  927586 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0308 03:08:26.743509  927586 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4135419376.tar
I0308 03:08:26.758984  927586 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4135419376.tar: stat -c "%s %y" /var/lib/minikube/build/build.4135419376.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4135419376.tar': No such file or directory
I0308 03:08:26.759030  927586 ssh_runner.go:362] scp /tmp/build.4135419376.tar --> /var/lib/minikube/build/build.4135419376.tar (3072 bytes)
I0308 03:08:26.825438  927586 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4135419376
I0308 03:08:26.840962  927586 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4135419376 -xf /var/lib/minikube/build/build.4135419376.tar
I0308 03:08:26.853586  927586 crio.go:297] Building image: /var/lib/minikube/build/build.4135419376
I0308 03:08:26.853681  927586 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-576754 /var/lib/minikube/build/build.4135419376 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0308 03:08:32.199405  927586 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-576754 /var/lib/minikube/build/build.4135419376 --cgroup-manager=cgroupfs: (5.345684477s)
I0308 03:08:32.199513  927586 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4135419376
I0308 03:08:32.214311  927586 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4135419376.tar
I0308 03:08:32.226617  927586 build_images.go:207] Built localhost/my-image:functional-576754 from /tmp/build.4135419376.tar
I0308 03:08:32.226658  927586 build_images.go:123] succeeded building to: functional-576754
I0308 03:08:32.226664  927586 build_images.go:124] failed building to: 
I0308 03:08:32.226691  927586 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.226704  927586 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.227016  927586 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.227032  927586 main.go:141] libmachine: Making call to close connection to plugin binary
I0308 03:08:32.227041  927586 main.go:141] libmachine: Making call to close driver server
I0308 03:08:32.227048  927586 main.go:141] libmachine: (functional-576754) Calling .Close
I0308 03:08:32.227313  927586 main.go:141] libmachine: (functional-576754) DBG | Closing plugin on server side
I0308 03:08:32.227342  927586 main.go:141] libmachine: Successfully made call to close driver server
I0308 03:08:32.227357  927586 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls
E0308 03:08:32.256500  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:32.262498  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:32.272815  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:32.293145  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:32.333453  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:32.413752  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.27s)
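
Editor's note: the build stdout above implies a three-step build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). Below is a minimal sketch of driving the same build-then-list flow from Go; the reconstructed Dockerfile and content.txt are assumptions inferred from the STEP lines, not the actual contents of testdata/build.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Reconstruct a build context matching the three STEP lines in the log.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Build inside the cluster's runtime, then confirm the tag shows up in `image ls`.
	minikube := "out/minikube-linux-amd64"
	build := exec.Command(minikube, "-p", "functional-576754",
		"image", "build", "-t", "localhost/my-image:functional-576754", dir)
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		panic(err)
	}

	out, err := exec.Command(minikube, "-p", "functional-576754", "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "localhost/my-image:functional-576754") {
		fmt.Println("image built and listed")
	}
}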

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.218370432s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-576754
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.126:30993
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr: (4.716055836s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.126:30993
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
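
Editor's note: the HTTPS, Format, and URL subtests all resolve the hello-node NodePort to 192.168.39.126:30993. Below is a minimal sketch of resolving the URL the same way and probing it, assuming the profile and service names from the log; the HTTP GET at the end is an extra check that the test itself does not perform.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the hello-node URL, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-576754",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.126:30993
	if i := strings.IndexByte(url, '\n'); i >= 0 {
		url = url[:i] // --url may print one URL per exposed port; take the first
	}

	// Probe the resolved endpoint.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
}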

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr: (2.717216319s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-576754
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-576754 image load --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr: (5.725084566s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image rm gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-576754
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-576754 image save --daemon gcr.io/google-containers/addon-resizer:functional-576754 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-576754
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-576754
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-576754
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-576754
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (225.18s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-576225 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0308 03:08:42.499068  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:08:52.739401  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:09:13.219754  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:09:54.181443  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:11:16.102231  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-576225 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m44.442550613s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (225.18s)

                                                
                                    
TestMutliControlPlane/serial/DeployApp (4.83s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-576225 -- rollout status deployment/busybox: (2.270326883s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-9594n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-cc27d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-wlj7r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-9594n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-cc27d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-wlj7r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-9594n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-cc27d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-wlj7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (4.83s)

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.44s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-9594n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-9594n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-cc27d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-cc27d -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-wlj7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-576225 -- exec busybox-5b5d89c9d6-wlj7r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.44s)
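
Note on the check above: the ha_test.go:207 step recovers the host address from busybox nslookup output purely by position (5th line, 3rd space-separated field) before ha_test.go:218 pings it (192.168.39.1 in this run). A minimal Go sketch of that same positional extraction, using hypothetical nslookup output rather than anything captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors the shell pipeline
// `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`:
// take the 5th output line and return its 3rd space-separated field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // awk 'NR==5' -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3 -> index 2
}

func main() {
	// Hypothetical busybox-style nslookup output; the exact layout depends on
	// the image, which is why the test addresses the line and field by number.
	sample := strings.Join([]string{
		"Server:    10.96.0.10",
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
		"",
		"Name:      host.minikube.internal",
		"Address 1: 192.168.39.1 host.minikube.internal",
	}, "\n")
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.39.1, the address pinged above
}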

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (44.28s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-576225 -v=7 --alsologtostderr
E0308 03:12:52.009034  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.014367  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.024685  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.045025  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.085367  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.165743  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.326231  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:52.646823  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:53.287785  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:54.568345  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:12:57.128968  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:13:02.249618  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:13:12.489984  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-576225 -v=7 --alsologtostderr: (43.394352019s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (44.28s)

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-576225 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.58s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (13.84s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp testdata/cp-test.txt ha-576225:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225:/home/docker/cp-test.txt ha-576225-m02:/home/docker/cp-test_ha-576225_ha-576225-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test_ha-576225_ha-576225-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225:/home/docker/cp-test.txt ha-576225-m03:/home/docker/cp-test_ha-576225_ha-576225-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test_ha-576225_ha-576225-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225:/home/docker/cp-test.txt ha-576225-m04:/home/docker/cp-test_ha-576225_ha-576225-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test_ha-576225_ha-576225-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp testdata/cp-test.txt ha-576225-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m02:/home/docker/cp-test.txt ha-576225:/home/docker/cp-test_ha-576225-m02_ha-576225.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test_ha-576225-m02_ha-576225.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m02:/home/docker/cp-test.txt ha-576225-m03:/home/docker/cp-test_ha-576225-m02_ha-576225-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test_ha-576225-m02_ha-576225-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m02:/home/docker/cp-test.txt ha-576225-m04:/home/docker/cp-test_ha-576225-m02_ha-576225-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test_ha-576225-m02_ha-576225-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp testdata/cp-test.txt ha-576225-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt ha-576225:/home/docker/cp-test_ha-576225-m03_ha-576225.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test_ha-576225-m03_ha-576225.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt ha-576225-m02:/home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test_ha-576225-m03_ha-576225-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m03:/home/docker/cp-test.txt ha-576225-m04:/home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test_ha-576225-m03_ha-576225-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp testdata/cp-test.txt ha-576225-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1241973602/001/cp-test_ha-576225-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt ha-576225:/home/docker/cp-test_ha-576225-m04_ha-576225.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225 "sudo cat /home/docker/cp-test_ha-576225-m04_ha-576225.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt ha-576225-m02:/home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m02 "sudo cat /home/docker/cp-test_ha-576225-m04_ha-576225-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 cp ha-576225-m04:/home/docker/cp-test.txt ha-576225-m03:/home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 ssh -n ha-576225-m03 "sudo cat /home/docker/cp-test_ha-576225-m04_ha-576225-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (13.84s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.52s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.516197473s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.52s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (17.43s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-576225 node delete m03 -v=7 --alsologtostderr: (16.629925872s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (17.43s)
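
Note on the ha_test.go:519 check above (the same template reappears after the cluster restart below): kubectl renders the node list through a Go template that emits one status value per Ready condition. As a rough offline illustration only, the same template can be exercised with Go's text/template against a hand-written, trimmed-down NodeList payload; decoding into a plain map is what lets the lowercase .items/.status keys resolve:

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

func main() {
	// Hypothetical `kubectl get nodes -o json` payload, trimmed to the fields
	// the template actually reads; a real NodeList carries far more.
	const nodesJSON = `{"items":[
	  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
	]}`

	// The template string quoted in ha_test.go above.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints one " True" line per node whose Ready condition is True.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}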

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (334.81s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-576225 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0308 03:27:52.008011  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:28:32.256277  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:29:15.053014  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-576225 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m33.976922453s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (334.81s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.43s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.43s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (77.13s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-576225 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-576225 --control-plane -v=7 --alsologtostderr: (1m16.248018137s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-576225 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (77.13s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                    
TestJSONOutput/start/Command (96.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-423393 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0308 03:32:52.007979  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:33:32.256591  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-423393 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.306718172s)
--- PASS: TestJSONOutput/start/Command (96.31s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-423393 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-423393 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-423393 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-423393 --output=json --user=testUser: (7.470854638s)
--- PASS: TestJSONOutput/stop/Command (7.47s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-094549 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-094549 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.60395ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6b0bb4d1-daa5-41de-835d-24e27958c506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-094549] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b847b973-c42f-4d6e-a487-636b701b55ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18333"}}
	{"specversion":"1.0","id":"56f8da39-f269-4958-9ddd-9cfd71c2d1dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6ea8f72-4495-4f1d-a937-28e214f734e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig"}}
	{"specversion":"1.0","id":"1f9d7f53-2e87-4bfd-b380-34287f06b823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube"}}
	{"specversion":"1.0","id":"5c455b72-e188-4605-963d-08ba69417f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"556a12fd-a656-423a-8847-214c6a0cb2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4a25f636-6fee-4932-93e4-3f43f26d1404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-094549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-094549
--- PASS: TestErrorJSONOutput (0.22s)
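
Note on the stdout above: each line is a single JSON event with a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data). A minimal Go sketch of consuming one of those lines, modelling only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// event models only the envelope fields that appear in the stdout above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event from the log, verbatim.
	line := `{"specversion":"1.0","id":"4a25f636-6fee-4932-93e4-3f43f26d1404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	// "56" here is the same value that appears as the process exit status above.
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}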

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (96.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-327546 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-327546 --driver=kvm2  --container-runtime=crio: (46.720143858s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-330857 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-330857 --driver=kvm2  --container-runtime=crio: (46.399235714s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-327546
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-330857
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-330857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-330857
helpers_test.go:175: Cleaning up "first-327546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-327546
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-327546: (1.00473534s)
--- PASS: TestMinikubeProfile (96.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-793980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-793980 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.827959397s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.83s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-793980 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-793980 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.99s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-818204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-818204 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.985261883s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-793980 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.42s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-818204
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-818204: (1.420012553s)
--- PASS: TestMountStart/serial/Stop (1.42s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-818204
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-818204: (22.575080092s)
--- PASS: TestMountStart/serial/RestartStopped (23.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-818204 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0308 03:37:52.009775  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:38:32.256734  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.856413553s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-959285 -- rollout status deployment/busybox: (1.844255975s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-g8bd8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-mmt2r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-g8bd8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-mmt2r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-g8bd8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-mmt2r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.65s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-g8bd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-g8bd8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-mmt2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-959285 -- exec busybox-5b5d89c9d6-mmt2r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
TestMultiNode/serial/AddNode (39.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-959285 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-959285 -v 3 --alsologtostderr: (38.673535265s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-959285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.65s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp testdata/cp-test.txt multinode-959285:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285:/home/docker/cp-test.txt multinode-959285-m02:/home/docker/cp-test_multinode-959285_multinode-959285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test_multinode-959285_multinode-959285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285:/home/docker/cp-test.txt multinode-959285-m03:/home/docker/cp-test_multinode-959285_multinode-959285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test_multinode-959285_multinode-959285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp testdata/cp-test.txt multinode-959285-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt multinode-959285:/home/docker/cp-test_multinode-959285-m02_multinode-959285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test_multinode-959285-m02_multinode-959285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m02:/home/docker/cp-test.txt multinode-959285-m03:/home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test_multinode-959285-m02_multinode-959285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp testdata/cp-test.txt multinode-959285-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653434620/001/cp-test_multinode-959285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt multinode-959285:/home/docker/cp-test_multinode-959285-m03_multinode-959285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285 "sudo cat /home/docker/cp-test_multinode-959285-m03_multinode-959285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 cp multinode-959285-m03:/home/docker/cp-test.txt multinode-959285-m02:/home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test_multinode-959285-m03_multinode-959285-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.65s)
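Note: the copy-file checks above can be reproduced by hand against any running multi-node profile. A minimal sketch, assuming minikube is on PATH and a profile named multinode-959285 with nodes m02/m03 exists (profile and file names here are illustrative):

    # copy a local file onto the primary node, then fan it out to another node
    minikube -p multinode-959285 cp testdata/cp-test.txt multinode-959285:/home/docker/cp-test.txt
    minikube -p multinode-959285 cp multinode-959285:/home/docker/cp-test.txt multinode-959285-m02:/home/docker/cp-test.txt
    # verify the contents over SSH on the target node
    minikube -p multinode-959285 ssh -n multinode-959285-m02 "sudo cat /home/docker/cp-test.txt"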

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-959285 node stop m03: (2.297001873s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959285 status: exit status 7 (460.506577ms)

                                                
                                                
-- stdout --
	multinode-959285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-959285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-959285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr: exit status 7 (433.63703ms)

                                                
                                                
-- stdout --
	multinode-959285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-959285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-959285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 03:40:14.223891  943513 out.go:291] Setting OutFile to fd 1 ...
	I0308 03:40:14.224355  943513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:40:14.224375  943513 out.go:304] Setting ErrFile to fd 2...
	I0308 03:40:14.224383  943513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 03:40:14.224848  943513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 03:40:14.225385  943513 out.go:298] Setting JSON to false
	I0308 03:40:14.225522  943513 notify.go:220] Checking for updates...
	I0308 03:40:14.225600  943513 mustload.go:65] Loading cluster: multinode-959285
	I0308 03:40:14.226180  943513 config.go:182] Loaded profile config "multinode-959285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 03:40:14.226203  943513 status.go:255] checking status of multinode-959285 ...
	I0308 03:40:14.226690  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.226756  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.242202  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0308 03:40:14.242580  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.243134  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.243157  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.243582  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.243806  943513 main.go:141] libmachine: (multinode-959285) Calling .GetState
	I0308 03:40:14.245501  943513 status.go:330] multinode-959285 host status = "Running" (err=<nil>)
	I0308 03:40:14.245519  943513 host.go:66] Checking if "multinode-959285" exists ...
	I0308 03:40:14.245826  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.245883  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.260827  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I0308 03:40:14.261196  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.261678  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.261699  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.262031  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.262225  943513 main.go:141] libmachine: (multinode-959285) Calling .GetIP
	I0308 03:40:14.264874  943513 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:40:14.265341  943513 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:40:14.265381  943513 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:40:14.265508  943513 host.go:66] Checking if "multinode-959285" exists ...
	I0308 03:40:14.265811  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.265859  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.280249  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I0308 03:40:14.280610  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.281048  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.281069  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.281395  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.281586  943513 main.go:141] libmachine: (multinode-959285) Calling .DriverName
	I0308 03:40:14.281732  943513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:40:14.281757  943513 main.go:141] libmachine: (multinode-959285) Calling .GetSSHHostname
	I0308 03:40:14.283995  943513 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:40:14.284412  943513 main.go:141] libmachine: (multinode-959285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:7e:26", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:37:50 +0000 UTC Type:0 Mac:52:54:00:da:7e:26 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-959285 Clientid:01:52:54:00:da:7e:26}
	I0308 03:40:14.284448  943513 main.go:141] libmachine: (multinode-959285) DBG | domain multinode-959285 has defined IP address 192.168.39.174 and MAC address 52:54:00:da:7e:26 in network mk-multinode-959285
	I0308 03:40:14.284542  943513 main.go:141] libmachine: (multinode-959285) Calling .GetSSHPort
	I0308 03:40:14.284693  943513 main.go:141] libmachine: (multinode-959285) Calling .GetSSHKeyPath
	I0308 03:40:14.284868  943513 main.go:141] libmachine: (multinode-959285) Calling .GetSSHUsername
	I0308 03:40:14.285017  943513 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285/id_rsa Username:docker}
	I0308 03:40:14.365443  943513 ssh_runner.go:195] Run: systemctl --version
	I0308 03:40:14.372354  943513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:40:14.389716  943513 kubeconfig.go:125] found "multinode-959285" server: "https://192.168.39.174:8443"
	I0308 03:40:14.389747  943513 api_server.go:166] Checking apiserver status ...
	I0308 03:40:14.389789  943513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 03:40:14.404596  943513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0308 03:40:14.416310  943513 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 03:40:14.416364  943513 ssh_runner.go:195] Run: ls
	I0308 03:40:14.421128  943513 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0308 03:40:14.426167  943513 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0308 03:40:14.426187  943513 status.go:422] multinode-959285 apiserver status = Running (err=<nil>)
	I0308 03:40:14.426197  943513 status.go:257] multinode-959285 status: &{Name:multinode-959285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:40:14.426217  943513 status.go:255] checking status of multinode-959285-m02 ...
	I0308 03:40:14.426532  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.426573  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.442093  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33379
	I0308 03:40:14.442502  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.443047  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.443073  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.443428  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.443619  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetState
	I0308 03:40:14.445062  943513 status.go:330] multinode-959285-m02 host status = "Running" (err=<nil>)
	I0308 03:40:14.445079  943513 host.go:66] Checking if "multinode-959285-m02" exists ...
	I0308 03:40:14.445384  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.445428  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.460444  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0308 03:40:14.460860  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.461330  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.461354  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.461651  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.461793  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetIP
	I0308 03:40:14.464163  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | domain multinode-959285-m02 has defined MAC address 52:54:00:42:0c:44 in network mk-multinode-959285
	I0308 03:40:14.464601  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:0c:44", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:38:51 +0000 UTC Type:0 Mac:52:54:00:42:0c:44 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-959285-m02 Clientid:01:52:54:00:42:0c:44}
	I0308 03:40:14.464627  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | domain multinode-959285-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:42:0c:44 in network mk-multinode-959285
	I0308 03:40:14.464769  943513 host.go:66] Checking if "multinode-959285-m02" exists ...
	I0308 03:40:14.465098  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.465134  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.480173  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I0308 03:40:14.480529  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.480995  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.481014  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.481355  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.481509  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .DriverName
	I0308 03:40:14.481696  943513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 03:40:14.481717  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetSSHHostname
	I0308 03:40:14.484332  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | domain multinode-959285-m02 has defined MAC address 52:54:00:42:0c:44 in network mk-multinode-959285
	I0308 03:40:14.484702  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:0c:44", ip: ""} in network mk-multinode-959285: {Iface:virbr1 ExpiryTime:2024-03-08 04:38:51 +0000 UTC Type:0 Mac:52:54:00:42:0c:44 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:multinode-959285-m02 Clientid:01:52:54:00:42:0c:44}
	I0308 03:40:14.484725  943513 main.go:141] libmachine: (multinode-959285-m02) DBG | domain multinode-959285-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:42:0c:44 in network mk-multinode-959285
	I0308 03:40:14.484845  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetSSHPort
	I0308 03:40:14.485065  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetSSHKeyPath
	I0308 03:40:14.485233  943513 main.go:141] libmachine: (multinode-959285-m02) Calling .GetSSHUsername
	I0308 03:40:14.485405  943513 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18333-911675/.minikube/machines/multinode-959285-m02/id_rsa Username:docker}
	I0308 03:40:14.565318  943513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 03:40:14.581150  943513 status.go:257] multinode-959285-m02 status: &{Name:multinode-959285-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0308 03:40:14.581199  943513 status.go:255] checking status of multinode-959285-m03 ...
	I0308 03:40:14.581541  943513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0308 03:40:14.581583  943513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0308 03:40:14.597738  943513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0308 03:40:14.598210  943513 main.go:141] libmachine: () Calling .GetVersion
	I0308 03:40:14.598761  943513 main.go:141] libmachine: Using API Version  1
	I0308 03:40:14.598786  943513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0308 03:40:14.599105  943513 main.go:141] libmachine: () Calling .GetMachineName
	I0308 03:40:14.599308  943513 main.go:141] libmachine: (multinode-959285-m03) Calling .GetState
	I0308 03:40:14.600744  943513 status.go:330] multinode-959285-m03 host status = "Stopped" (err=<nil>)
	I0308 03:40:14.600758  943513 status.go:343] host is not running, skipping remaining checks
	I0308 03:40:14.600763  943513 status.go:257] multinode-959285-m03 status: &{Name:multinode-959285-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.19s)
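Note: the status output above also shows the exit-code contract this test relies on: once any node is stopped, "minikube status" returns exit status 7 rather than 0. A minimal sketch of the same flow, assuming the multinode-959285 profile from the earlier steps:

    minikube -p multinode-959285 node stop m03
    minikube -p multinode-959285 status --alsologtostderr
    echo $?   # expected: 7 while m03 is stopped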

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (27.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-959285 node start m03 -v=7 --alsologtostderr: (27.264791545s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.91s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-959285 node delete m03: (2.031323182s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.57s)
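Note: the final kubectl check above uses a go-template to print one Ready condition per node, which is how the test confirms that only the remaining nodes report Ready after the delete. The same query, run directly (assuming kubectl points at the multinode-959285 context):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'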

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (194.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959285 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0308 03:48:32.257080  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959285 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.381306545s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-959285 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (194.95s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-959285
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959285-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-959285-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.039709ms)

                                                
                                                
-- stdout --
	* [multinode-959285-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-959285-m02' is duplicated with machine name 'multinode-959285-m02' in profile 'multinode-959285'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-959285-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-959285-m03 --driver=kvm2  --container-runtime=crio: (47.113342054s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-959285
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-959285: exit status 80 (228.538051ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-959285 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-959285-m03 already exists in multinode-959285-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-959285-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-959285-m03: (1.058336034s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.53s)
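Note: the two failures above are the expected guard rails: a new profile may not reuse a machine name that already belongs to an existing multi-node profile (exit status 14, MK_USAGE), and "node add" refuses to add a node whose generated name collides with an existing profile (exit status 80, GUEST_NODE_ADD). A minimal reproduction sketch, assuming the multinode-959285 profile is running:

    minikube start -p multinode-959285-m02 --driver=kvm2 --container-runtime=crio   # rejected: name is already a machine name in multinode-959285
    minikube start -p multinode-959285-m03 --driver=kvm2 --container-runtime=crio   # allowed: creates a standalone profile
    minikube node add -p multinode-959285                                           # rejected while the standalone m03 profile exists
    minikube delete -p multinode-959285-m03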

                                                
                                    
x
+
TestScheduledStopUnix (116.26s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-243229 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-243229 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.48886341s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-243229 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-243229 -n scheduled-stop-243229
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-243229 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-243229 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-243229 -n scheduled-stop-243229
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-243229
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-243229 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-243229
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-243229: exit status 7 (76.392466ms)

                                                
                                                
-- stdout --
	scheduled-stop-243229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-243229 -n scheduled-stop-243229
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-243229 -n scheduled-stop-243229: exit status 7 (75.500972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-243229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-243229
--- PASS: TestScheduledStopUnix (116.26s)
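Note: the scheduled-stop flow exercised above maps onto three user-facing commands: schedule a stop, cancel it, and schedule a short one that actually fires, after which "status" reports Stopped with exit status 7. A minimal sketch, assuming a running profile named scheduled-stop-243229:

    minikube stop -p scheduled-stop-243229 --schedule 5m        # arm a stop 5 minutes out
    minikube stop -p scheduled-stop-243229 --cancel-scheduled   # disarm it
    minikube stop -p scheduled-stop-243229 --schedule 15s       # arm a short stop and let it fire
    sleep 20 && minikube status -p scheduled-stop-243229        # exit status 7 once the VM is stopped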

                                                
                                    
x
+
TestRunningBinaryUpgrade (192.63s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.631260260 start -p running-upgrade-412346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0308 03:57:52.008361  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
E0308 03:58:15.305740  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
E0308 03:58:32.256945  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.631260260 start -p running-upgrade-412346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m48.71161136s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-412346 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-412346 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.962202703s)
helpers_test.go:175: Cleaning up "running-upgrade-412346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-412346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-412346: (1.213338138s)
--- PASS: TestRunningBinaryUpgrade (192.63s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (202.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1127744690 start -p stopped-upgrade-306267 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1127744690 start -p stopped-upgrade-306267 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.76308773s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1127744690 -p stopped-upgrade-306267 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1127744690 -p stopped-upgrade-306267 stop: (2.139418063s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-306267 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-306267 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.695282822s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (202.60s)
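Note: both binary-upgrade tests follow the same pattern: bring a cluster up with an older release binary, optionally stop it, then re-run "start" on the same profile with the binary under test and let it upgrade in place. A minimal sketch, assuming an old release has been downloaded to ./minikube-v1.26.0 and the new build is ./minikube (both paths are illustrative):

    ./minikube-v1.26.0 start -p stopped-upgrade-306267 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    ./minikube-v1.26.0 -p stopped-upgrade-306267 stop
    ./minikube start -p stopped-upgrade-306267 --memory=2200 --driver=kvm2 --container-runtime=crio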

                                                
                                    
x
+
TestPause/serial/Start (98.37s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-851116 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-851116 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.368533917s)
--- PASS: TestPause/serial/Start (98.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.014515ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-995759] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
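Note: the MK_USAGE failure above is the documented conflict between --no-kubernetes and an explicit --kubernetes-version; the fix suggested in the error text is to clear any globally configured version before starting without Kubernetes. A minimal sketch (profile name illustrative):

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-995759 --no-kubernetes --driver=kvm2 --container-runtime=crio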

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (44.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-995759 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-995759 --driver=kvm2  --container-runtime=crio: (44.474165108s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-995759 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-306267
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --driver=kvm2  --container-runtime=crio: (15.974989679s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-995759 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-995759 status -o json: exit status 2 (250.968886ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-995759","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-995759
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-995759: (1.109604384s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-995759 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.636903514s)
--- PASS: TestNoKubernetes/serial/Start (27.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-995759 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-995759 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.365347ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
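Note: the verification above simply asks systemd inside the guest whether kubelet is active; with --no-kubernetes the unit is expected to be inactive, so the SSH command exits non-zero. A sketch of the same check:

    minikube ssh -p NoKubernetes-995759 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero when kubelet is not running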

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-995759
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-995759: (1.420633492s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (63.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-995759 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-995759 --driver=kvm2  --container-runtime=crio: (1m3.945698594s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (63.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-995759 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-995759 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.916043ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-678320 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-678320 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (125.2235ms)

                                                
                                                
-- stdout --
	* [false-678320] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18333
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0308 04:03:18.674361  954507 out.go:291] Setting OutFile to fd 1 ...
	I0308 04:03:18.674537  954507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:03:18.674551  954507 out.go:304] Setting ErrFile to fd 2...
	I0308 04:03:18.674557  954507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 04:03:18.674851  954507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18333-911675/.minikube/bin
	I0308 04:03:18.675516  954507 out.go:298] Setting JSON to false
	I0308 04:03:18.676614  954507 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27925,"bootTime":1709842674,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0308 04:03:18.677077  954507 start.go:139] virtualization: kvm guest
	I0308 04:03:18.680168  954507 out.go:177] * [false-678320] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0308 04:03:18.682356  954507 notify.go:220] Checking for updates...
	I0308 04:03:18.682370  954507 out.go:177]   - MINIKUBE_LOCATION=18333
	I0308 04:03:18.683777  954507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 04:03:18.685334  954507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18333-911675/kubeconfig
	I0308 04:03:18.686667  954507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18333-911675/.minikube
	I0308 04:03:18.687980  954507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0308 04:03:18.689249  954507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 04:03:18.691193  954507 config.go:182] Loaded profile config "cert-expiration-401581": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:03:18.691339  954507 config.go:182] Loaded profile config "cert-options-576568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0308 04:03:18.691481  954507 config.go:182] Loaded profile config "kubernetes-upgrade-219954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0308 04:03:18.691597  954507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 04:03:18.729889  954507 out.go:177] * Using the kvm2 driver based on user configuration
	I0308 04:03:18.731460  954507 start.go:297] selected driver: kvm2
	I0308 04:03:18.731473  954507 start.go:901] validating driver "kvm2" against <nil>
	I0308 04:03:18.731497  954507 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 04:03:18.733599  954507 out.go:177] 
	W0308 04:03:18.735079  954507 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0308 04:03:18.736375  954507 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-678320 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-678320

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-678320" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: cert-expiration-401581
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.107:8443
  name: kubernetes-upgrade-219954
contexts:
- context:
    cluster: cert-expiration-401581
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-401581
  name: cert-expiration-401581
- context:
    cluster: kubernetes-upgrade-219954
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-219954
  name: kubernetes-upgrade-219954
current-context: kubernetes-upgrade-219954
kind: Config
preferences: {}
users:
- name: cert-expiration-401581
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.key
- name: kubernetes-upgrade-219954
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key
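
As an aside, a kubeconfig like the one dumped above can be inspected programmatically with client-go. The snippet below is a minimal sketch, assuming the file sits at the default ~/.kube/config path (the report does not state where this particular config is stored):

// Sketch: list the contexts in a kubeconfig using client-go's clientcmd loader.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	// Assumed default location; adjust to the kubeconfig actually in use.
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q (namespace %q)\n", name, ctx.Cluster, ctx.Namespace)
	}
}

Run against the config above, this would report kubernetes-upgrade-219954 as the current context, with cert-expiration-401581 available as a second context.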

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-678320

>>> host: docker daemon status:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: docker daemon config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /etc/docker/daemon.json:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: docker system info:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: cri-docker daemon status:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: cri-docker daemon config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: cri-dockerd version:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: containerd daemon status:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: containerd daemon config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /etc/containerd/config.toml:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: containerd config dump:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: crio daemon status:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: crio daemon config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: /etc/crio:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

>>> host: crio config:
* Profile "false-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-678320"

----------------------- debugLogs end: false-678320 [took: 3.557803824s] --------------------------------
helpers_test.go:175: Cleaning up "false-678320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-678320
--- PASS: TestNetworkPlugins/group/false (3.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (143.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-477676 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-477676 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m23.39974575s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (143.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (127.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-416634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-416634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m7.249143086s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (127.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-477676 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [335b3350-ef43-4e89-8ea5-b91db6db6313] Pending
helpers_test.go:344: "busybox" [335b3350-ef43-4e89-8ea5-b91db6db6313] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [335b3350-ef43-4e89-8ea5-b91db6db6313] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004889579s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-477676 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-968261 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-968261 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (59.609880926s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-477676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-477676 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-416634 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f4ce37a-eea2-4c43-95a8-57efc013ff82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f4ce37a-eea2-4c43-95a8-57efc013ff82] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.00528444s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-416634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-416634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-416634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026733761s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-416634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [285ff49b-6aad-46e0-b83e-1f5e7526dc8e] Pending
helpers_test.go:344: "busybox" [285ff49b-6aad-46e0-b83e-1f5e7526dc8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [285ff49b-6aad-46e0-b83e-1f5e7526dc8e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.005248048s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-968261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-968261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012268666s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-968261 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (703.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-477676 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-477676 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m43.655016133s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-477676 -n no-preload-477676
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (703.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (611.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-416634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-416634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m11.600440317s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-416634 -n embed-certs-416634
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (611.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-968261 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-968261 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m57.608224919s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-968261 -n default-k8s-diff-port-968261
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-496808 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-496808 --alsologtostderr -v=3: (3.300003159s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-496808 -n old-k8s-version-496808: exit status 7 (72.643686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-496808 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (63.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-525359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-525359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m3.258996897s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (119.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m59.469599908s)
--- PASS: TestNetworkPlugins/group/auto/Start (119.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (96.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m36.747303978s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.75s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-525359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-525359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.027554796s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-525359 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-525359 --alsologtostderr -v=3: (8.420476063s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-525359 -n newest-cni-525359
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-525359 -n newest-cni-525359: exit status 7 (85.990895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-525359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (54.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-525359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0308 04:35:55.055525  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-525359 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (53.889236172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-525359 -n newest-cni-525359
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (54.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5n4gz" [16b8b33a-2dbd-420e-8efd-aa2a0727192d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5n4gz" [16b8b33a-2dbd-420e-8efd-aa2a0727192d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004588666s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)
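
As an aside, the "waiting 15m0s for pods matching "app=netcat"" checks above come from the suite's shared test helpers; the sketch below is only a rough, hand-written approximation of that pattern (poll pods by label selector until one reports Running), not the helpers' actual implementation, and the kubeconfig path in it is hypothetical.

// Sketch: wait for a pod matching a label selector to reach the Running phase.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; point this at the profile's real kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}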

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bscg5" [67e804c3-56a9-4a15-9b0a-c27ed1318ae4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.010139543s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-525359 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-525359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-525359 -n newest-cni-525359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-525359 -n newest-cni-525359: exit status 2 (264.766391ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-525359 -n newest-cni-525359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-525359 -n newest-cni-525359: exit status 2 (269.030854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-525359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-525359 -n newest-cni-525359
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-525359 -n newest-cni-525359
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.63s)
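
As an aside, the --format={{.APIServer}} and --format={{.Kubelet}} flags used in this Pause sequence are Go text/template expressions evaluated against minikube's status object. The sketch below only illustrates that mechanism; the Status struct fields here are inferred from the templates seen in the log, not copied from minikube's source.

// Sketch: render a status-like struct through a Go text/template, as a
// --format flag would.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	// Prints "Paused", matching the stdout captured while the cluster was paused.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}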

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (91.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.187880604s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dpcq2" [88706929-3f4a-409b-bcb6-7ee26c7dbd0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dpcq2" [88706929-3f4a-409b-bcb6-7ee26c7dbd0c] Running
E0308 04:36:30.456630  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.461915  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.472283  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.492700  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.533001  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.614119  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:30.775283  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:31.096302  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:36:31.737483  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005752101s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-678320 exec deployment/netcat -- nslookup kubernetes.default
E0308 04:36:33.017654  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (93.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m33.168364124s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (93.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (145.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0308 04:36:40.699404  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m25.159337412s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (145.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (140.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0308 04:36:50.940131  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:37:11.420477  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:37:33.567339  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.572721  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.583018  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.603328  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.643637  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.724710  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:33.885107  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:34.205802  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:34.846449  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:36.126768  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:38.687307  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:43.808113  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
E0308 04:37:52.008382  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/functional-576754/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m20.353206067s)
--- PASS: TestNetworkPlugins/group/flannel/Start (140.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l8fq2" [e0e189a6-7890-4009-b0c5-df5104364cd7] Running
E0308 04:37:52.380957  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
E0308 04:37:54.048817  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006648698s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jnwkv" [5c30cd6d-386e-4287-b0b8-f7d65a024d8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jnwkv" [5c30cd6d-386e-4287-b0b8-f7d65a024d8d] Running
E0308 04:38:07.520513  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.525788  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.536050  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.556312  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.596728  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.677439  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:07.837931  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:08.158353  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:08.799533  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
E0308 04:38:10.079695  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005235959s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dkpd4" [4012a476-d479-4a00-af2d-281680b4d7cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dkpd4" [4012a476-d479-4a00-af2d-281680b4d7cf] Running
E0308 04:38:17.760911  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/old-k8s-version-496808/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005664369s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (98.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0308 04:38:32.256952  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/addons-963897/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-678320 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.70699715s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-678320 replace --force -f testdata/netcat-deployment.yaml: (1.035308679s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xrz9p" [b29737f2-3cbf-45bf-91de-d3fa5e4267d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xrz9p" [b29737f2-3cbf-45bf-91de-d3fa5e4267d4] Running
E0308 04:39:14.301172  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/no-preload-477676/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005292915s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j5fbz" [f950fc6a-de43-4195-9f3a-3dd12453c693] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005389836s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-22pkq" [e3f37640-c8e6-4ba2-b21f-cff684807f54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-22pkq" [e3f37640-c8e6-4ba2-b21f-cff684807f54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005306227s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-678320 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-678320 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k9jr5" [040de0f8-1247-4a44-b38d-b1ac7b27f28c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k9jr5" [040de0f8-1247-4a44-b38d-b1ac7b27f28c] Running
E0308 04:40:17.411626  918988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/default-k8s-diff-port-968261/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005250376s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-678320 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-678320 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.15
283 TestNetworkPlugins/group/kubenet 3.54
291 TestNetworkPlugins/group/cilium 4.19
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-030050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-030050
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-678320 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-678320" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: cert-expiration-401581
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.107:8443
  name: kubernetes-upgrade-219954
contexts:
- context:
    cluster: cert-expiration-401581
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-401581
  name: cert-expiration-401581
- context:
    cluster: kubernetes-upgrade-219954
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-219954
  name: kubernetes-upgrade-219954
current-context: kubernetes-upgrade-219954
kind: Config
preferences: {}
users:
- name: cert-expiration-401581
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.key
- name: kubernetes-upgrade-219954
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-678320

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-678320"

                                                
                                                
----------------------- debugLogs end: kubenet-678320 [took: 3.37802189s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-678320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-678320
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-678320 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-678320

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-678320

>>> host: crictl pods:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: crictl containers:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> k8s: describe netcat deployment:
error: context "cilium-678320" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-678320" does not exist

>>> k8s: netcat logs:
error: context "cilium-678320" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-678320" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-678320" does not exist

>>> k8s: coredns logs:
error: context "cilium-678320" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-678320" does not exist

>>> k8s: api server logs:
error: context "cilium-678320" does not exist

>>> host: /etc/cni:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: ip a s:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: ip r s:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: iptables-save:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: iptables table nat:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-678320

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-678320

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-678320" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-678320" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-678320

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-678320

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-678320" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-678320" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-678320" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-678320" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-678320" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: kubelet daemon config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> k8s: kubelet logs:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.84:8443
  name: cert-expiration-401581
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18333-911675/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.107:8443
  name: kubernetes-upgrade-219954
contexts:
- context:
    cluster: cert-expiration-401581
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-401581
  name: cert-expiration-401581
- context:
    cluster: kubernetes-upgrade-219954
    extensions:
    - extension:
        last-update: Fri, 08 Mar 2024 04:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-219954
  name: kubernetes-upgrade-219954
current-context: kubernetes-upgrade-219954
kind: Config
preferences: {}
users:
- name: cert-expiration-401581
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/cert-expiration-401581/client.key
- name: kubernetes-upgrade-219954
  user:
    client-certificate: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.crt
    client-key: /home/jenkins/minikube-integration/18333-911675/.minikube/profiles/kubernetes-upgrade-219954/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-678320

>>> host: docker daemon status:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: docker daemon config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: docker system info:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: cri-docker daemon status:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: cri-docker daemon config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: cri-dockerd version:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: containerd daemon status:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: containerd daemon config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: containerd config dump:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: crio daemon status:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: crio daemon config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: /etc/crio:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

>>> host: crio config:
* Profile "cilium-678320" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678320"

----------------------- debugLogs end: cilium-678320 [took: 4.029267276s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-678320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-678320
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)